
(MSDN) Managing Memory-Mapped Files

Randy Kath
Microsoft Developer Network Technology Group

Created: February 9, 1993

Abstract

Determining which function or set of functions to use for managing memory in your application is difficult without a solid understanding of how each group of functions works and the overall impact they each have on the operating system. In an effort to simplify these decisions, this technical article focuses on the use of the memory-mapped file functions in Windows: the functions that are available, the way they are used, and the impact their use has on operating system resources. The following topics are discussed in this article:

  • Introduction to managing memory in Windows operating systems
  • What are memory-mapped files?
  • How are memory-mapped files implemented?
  • Sharing memory with memory-mapped files
  • Using memory-mapped file functions

In addition to this technical article, a sample application called ProcessWalker is included on the Microsoft Developer Network CD. This sample application is useful for exploring the behavior of memory-mapped files in a process, and it provides several useful implementation examples.

Introduction

This is one of three related technical articles—“Managing Virtual Memory,” “Managing Memory-Mapped Files,” and “Managing Heap Memory”—that explain how to manage memory in applications for Windows. In each article, this introduction identifies the basic memory components in the Windows API programming model and indicates which article to reference for specific areas of interest.

The first version of the Windows operating system introduced a method of managing dynamic memory based on a single global heap, which all applications and the system share, and multiple, private local heaps, one for each application. Local and global memory management functions were also provided, offering extended features for this new memory management system. More recently, the Microsoft C run-time (CRT) libraries were modified to include capabilities for managing these heaps in Windows using native CRT functions such as malloc and free. Consequently, developers are now left with a choice—learn the new application programming interface (API) provided as part of Windows or stick to the portable, and typically familiar, CRT functions for managing memory in applications written for Windows.

Windows offers three groups of functions for managing memory in applications: memory-mapped file functions, heap memory functions, and virtual-memory functions.

Figure 1. The Windows API provides different levels of memory management for versatility in application programming.

In all, six sets of memory management functions exist in Windows, as shown in Figure 1, all of which were designed to be used independently of one another. So which set of functions should you use? The answer to this question depends greatly on two things: the type of memory management you want and how the functions relevant to it are implemented in the operating system. In other words, are you building a large database application where you plan to manipulate subsets of a large memory structure? Or maybe you’re planning some simple dynamic memory structures, such as linked lists or binary trees? In both cases, you need to know which functions offer the features best suited to your intention and exactly how much of a resource hit occurs when using each function.

Table 1 categorizes the memory management function groups and indicates which of the three technical articles in this series describes each group’s behavior. Each technical article emphasizes the impact these functions have on the system by describing the behavior of the system in response to using the functions.

Table 1. Various Memory Management Functions Available

  • Virtual-memory functions: affect a process’s virtual address space, the system pagefile, system memory, and hard disk space. See “Managing Virtual Memory.”
  • Memory-mapped file functions: affect a process’s virtual address space, the system pagefile, standard file I/O, system memory, and hard disk space. See “Managing Memory-Mapped Files.”
  • Heap memory functions: affect a process’s virtual address space, system memory, and the process heap resource structure. See “Managing Heap Memory.”
  • Global heap memory functions: affect a process’s heap resource structure. See “Managing Heap Memory.”
  • Local heap memory functions: affect a process’s heap resource structure. See “Managing Heap Memory.”
  • C run-time (CRT) library functions: affect a process’s heap resource structure. See “Managing Heap Memory.”

Each technical article discusses issues surrounding the use of Windows-specific functions.

What Are Memory-Mapped Files?

Memory-mapped files (MMFs) offer a unique memory management feature that allows applications to access files on disk in the same way they access dynamic memory—through pointers. With this capability you can map a view of all or part of a file on disk to a specific range of addresses within your process’s address space. And once that is done, accessing the content of a memory-mapped file is as simple as dereferencing a pointer in the designated range of addresses. So, writing data to a file can be as simple as assigning a value to a dereferenced pointer as in:

*pMem = 23;

Similarly, reading from a specific location within the file is simply:

nTokenLen = *pMem;

In the above examples, the pointer pMem represents an arbitrary address in the range of addresses that have been mapped to a view of a file. Each time the address is referenced (that is, each time the pointer is dereferenced), the memory-mapped file is the actual memory being addressed.

Note: While memory-mapped files offer a way to read and write directly to a file at specific locations, the actual action of reading/writing to the disk is handled at a lower level. Consequently, data is not actually transferred at the time the above instructions are executed. Instead, much of the file input/output (I/O) is cached to improve general system performance. You can override this behavior and force the system to perform disk transactions immediately by using the memory-mapped file function FlushViewOfFile, explained later.

What Do Memory-Mapped Files Have to Offer?

One advantage to using MMF I/O is that the system performs all data transfers for it in 4K pages of data. Internally all pages of memory are managed by the virtual-memory manager (VMM). It decides when a page should be paged to disk, which pages are to be freed for use by other applications, and how many pages each application can have out of the entire allotment of physical memory. Since the VMM performs all disk I/O in the same manner—reading or writing memory one page at a time—it has been optimized to make it as fast as possible. Limiting the disk read and write instructions to sequences of 4K pages means that several smaller reads or writes are effectively cached into one larger operation, reducing the number of times the hard disk read/write head moves. Reading and writing pages of memory at a time is sometimes referred to as paging and is common to virtual-memory management operating systems.
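The page size itself need not be hard-coded. As a small aside (not part of the article’s samples), a sketch like the following can query the page size and allocation granularity at run time with GetSystemInfo:

#include <windows.h>
#include <stdio.h>

int main (void)
{
    SYSTEM_INFO si;

    /* Ask the system for its paging characteristics. */
    GetSystemInfo (&si);

    printf ("Page size:              %lu bytes\n", si.dwPageSize);
    printf ("Allocation granularity: %lu bytes\n", si.dwAllocationGranularity);
    return 0;
}

On x86 systems the reported page size is the 4K figure mentioned above.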

Another advantage to using MMF I/O is that all of the actual I/O interaction now occurs in RAM in the form of standard memory addressing. Meanwhile, disk paging occurs periodically in the background, transparent to the application. While no gain in performance is observed when using MMFs for simply reading a file into RAM, other disk transactions can benefit immensely. Say, for example, an application implements a flat-file database file structure, where the database consists of hundreds of sequential records. Accessing a record within the file is simply a matter of determining the record’s location (a byte offset within the file) and reading the data from the file. Then, for every update, the record must be written to the file in order to save the change. For larger records, it may be advantageous to read only part of the record into memory at a time as needed. Unfortunately, though, each time a new part of the record is needed, another file read is required. The MMF approach works a little differently. When the record is first accessed, the entire 4K page(s) of memory containing the record is read into memory. All subsequent accesses to that record deal directly with the page(s) of memory in RAM. No disk I/O is required or enforced until the file is later closed or flushed.

Note: During normal system paging operations, memory-mapped files can be updated periodically. If the system needs a page of memory that is occupied by a page representing a memory-mapped file, it may free the page for use by another application. If the page was dirty at the time it was needed, the act of writing the data to disk will automatically update the file at that time. (A dirty page is a page of data that has been written to, but not saved to, disk; for more information on types of virtual-memory pages, see “The Virtual-Memory Manager in Windows NT” on the Developer Network CD.)

The flat-file database application example is useful in pointing out another advantage of using memory-mapped files. MMFs provide a mechanism to map portions of a file into memory as needed. This means that applications now have a way of getting to a small segment of data in an extremely large file without having to read the entire file into memory first. Using the above example of a large flat-file database, consider a database file housing 1,000,000 records of 125 bytes each. The file size necessary to store this database would be 1,000,000 * 125 = 125,000,000 bytes. To read a file that large would require an extremely large amount of memory. With MMFs, the entire file can be opened (but at this point no memory is required for reading the file) and a view (portion) of the file can be mapped to a range of addresses. Then, as mentioned above, each page in the view is read into memory only when addresses within the page are accessed.
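To make the record example concrete, the following sketch maps only the portion of such a file that contains one record. The names RECORD_SIZE, nRecord, and hMMFile are hypothetical, error handling is omitted, and the code assumes the usual Win32 requirement that a view’s file offset be a multiple of the system allocation granularity:

/* Hypothetical sketch: map only the pages containing record nRecord. */
#define RECORD_SIZE 125

SYSTEM_INFO si;
DWORD       dwRecordOffset, dwViewOffset;
char        *lpView, *lpRecord;

GetSystemInfo (&si);

/* Byte offset of the record within the file. */
dwRecordOffset = nRecord * RECORD_SIZE;

/* View offsets must be a multiple of the allocation granularity,
   so round down and remember the difference. */
dwViewOffset = dwRecordOffset - (dwRecordOffset % si.dwAllocationGranularity);

/* Map a view just large enough to span the whole record. */
lpView = (char *)MapViewOfFile (hMMFile,
                                FILE_MAP_READ,
                                0,
                                dwViewOffset,
                                (dwRecordOffset - dwViewOffset) + RECORD_SIZE);
if (lpView)
    {
    lpRecord = lpView + (dwRecordOffset - dwViewOffset);
    /* ... read the record's fields through lpRecord ... */
    UnmapViewOfFile (lpView);
    }

Only the page or pages spanned by the view are ever read into memory, no matter how large the underlying file is.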

How Are They Implemented?

Since Windows NT is a page-based virtual-memory system, memory-mapped files represent little more than an extension of an existing, internal memory management component. Essentially all applications in Windows NT are represented in their entirety by one or more files on disk and a subset of those files resident in random access memory (RAM) at any given time. For example, each application has an executable file that represents pages of executable code and resources for the application. These pages are swapped into and out of RAM, as they are needed, by the operating system. When a page of memory is no longer needed, the operating system relinquishes control over the page on behalf of the application that owns it and frees it for use by another. When that page becomes needed again, it is re-read from the executable file on disk. This is called backing the memory with a file, in this case, the executable file. Similarly, when a process starts, pages of memory are used to store static and dynamic data for that application. Once committed, these pages are backed by the system pagefile, similar to the way the executable file is used to back the pages of code. Figure 2 is a graphical representation of how pages of code and data are backed on the hard disk.

Figure 2. Memory used to represent pages of code in processes for Windows NT is backed directly by the application’s executable module, while memory used for pages of data is backed by the system pagefile.

Treating both code and data in the same manner paves the way for propagating this functionality to a level where applications can use it, too—which is what Windows does through memory-mapped files.

Shared Memory in Windows NT

Both code and data are treated the same way in Windows NT—both are represented by pages of memory and both have their pages backed by a file on disk. The only real difference is the file by which they are backed—code by the executable image and data by the system pagefile. Because of this, memory-mapped files are also able to provide a mechanism for sharing data between processes. By extending the memory-mapped file capability to include portions of the system pagefile, applications are able to share data that is backed by the pagefile. As shown in Figure 3, each application simply maps a view of the same portion of the pagefile, making the same pages of memory available to each application.

Figure 3. Processes share memory by mapping independent views of a common region in the system pagefile.

Windows NT’s tight security system prevents processes from directly sharing information with one another, but MMFs provide a mechanism that works with the security system. In order for one process to share data with another via MMFs, each process must have common access to the file. This is achieved by giving the MMF object a name that both processes use to open the file.
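A minimal sketch of that arrangement follows. The object name "SharedBlock" and the 4K size are arbitrary illustrations, and error handling is omitted; the first fragment belongs to the process that creates the shared memory, the second to any process that opens it afterward:

/* Process A: create a named, pagefile-backed mapping and write to it. */
HANDLE hMap;
char   *lpShared;

hMap = CreateFileMapping ((HANDLE)0xFFFFFFFF,   /* back with the pagefile */
                          NULL,
                          PAGE_READWRITE,
                          0,
                          0x1000,               /* 4K of shared memory */
                          "SharedBlock");
lpShared = (char *)MapViewOfFile (hMap, FILE_MAP_WRITE, 0, 0, 0);
lstrcpy (lpShared, "hello from process A");

/* Process B: open the same object by name and map its own view. */
HANDLE hMap2;
char   *lpShared2;

hMap2 = OpenFileMapping (FILE_MAP_READ, FALSE, "SharedBlock");
lpShared2 = (char *)MapViewOfFile (hMap2, FILE_MAP_READ, 0, 0, 0);
/* lpShared2 now addresses the same physical pages process A wrote. */

Both fragments use the functions discussed in detail later in this article.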

Internally, a shared section of the pagefile translates into pages of memory that are addressable by more than one process. To do this, Windows NT uses an internal resource called a prototype page-table entry (PPTE). PPTEs enable more than one process to address the same physical page of memory. PPTEs are a system resource, so their availability and security are controlled by the system alone. This way, processes can share data and still exist on a secure operating system. Figure 4 indicates how PPTEs are used in Windows NT’s virtual addressing scheme.

Figure 4. Prototype page-table entries are the mechanism that permits pages of memory to be shared among processes.

One of the best ways to use an MMF for sharing data is to use it in a DLL (dynamic-link library). The PortTool application serves as a useful illustration. PortTool uses a DLL to provide its porting functionality and relies on the main application for the user interface. The reason for this is simple: Other applications can then also use the DLL functionality. That is, other editors that are programmable can import the porting functionality. Because it is entirely feasible for PortTool to be running while another editor that imports the PortTool DLL is also running, it is best to economize system resources as much as possible between the applications. PortTool does this by using an MMF for sharing the porting information with both processes. Otherwise, both applications would be required to load their own set of porting information while running at the same time, a waste of system resources. The PortTool code demonstrates sharing memory via an MMF in a DLL.

Using Memory-Mapped File Functions

Memory-mapped file functions can be thought of as second cousins to the virtual-memory management functions in Windows. Like the virtual-memory functions, these functions directly affect a process’s address space and pages of physical memory. No overhead is required to manage the file views, other than the basic virtual-memory management that exists for all processes. These functions deal in reserved pages of memory and committed addresses in a process. The entire set of memory-mapped file functions is:

  • CreateFileMapping
  • OpenFileMapping
  • MapViewOfFile
  • MapViewOfFileEx
  • UnmapViewOfFile
  • FlushViewOfFile
  • CloseHandle

Each of these functions is individually discussed below, along with code examples that demonstrate their use.
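As a preview of how the pieces fit together, here is a hypothetical end-to-end sketch that maps an existing disk file, modifies it through a pointer, and cleans up. The file name "data.bin" is illustrative only and error handling is omitted:

HANDLE hFile, hMap;
char   *lpView;

/* Open the disk file that is to be mapped. */
hFile = CreateFile ("data.bin", GENERIC_READ | GENERIC_WRITE,
                    0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

/* Create the file-mapping object; a size of 0 means "size of the file". */
hMap = CreateFileMapping (hFile, NULL, PAGE_READWRITE, 0, 0, NULL);

/* Map a view of the entire file into the process's address space. */
lpView = (char *)MapViewOfFile (hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);

/* Read and write the file through ordinary pointer operations. */
lpView[0] = 'X';

/* Optionally force dirty pages to disk now rather than at unmap time. */
FlushViewOfFile (lpView, 0);

/* Clean up: unmap the view, then close the mapping object and the file. */
UnmapViewOfFile (lpView);
CloseHandle (hMap);
CloseHandle (hFile);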

Creating a File Mapping

To use a memory-mapped file, you start by creating a memory-mapped file object. The act of creating an MMF object has very little impact on system resources. It does not affect your process’s address space, and no virtual memory is allocated for the object (other than for the internal resources that are necessary in representing the object). One exception, however, is that, if the MMF object represents shared memory, an adequate portion of the system pagefile is reserved for use by the MMF during the creation of the object.

The CreateFileMapping function is used to create the file-mapping object as demonstrated in the example listed below, a portion of PMEM.C, the source module from the ProcessWalker sample application.

case IDM_MMFCREATENEW:
    {
    char szTmpFile[256];

    /* Create temporary file for mapping. */
    GetTempPath (256, szTmpFile);
    GetTempFileName (szTmpFile,
                     "PW",
                     0,
                     MMFiles[wParam-IDM_MMFCREATE].szMMFile);

    /* If file created, continue to map file. */
    if ((MMFiles[wParam-IDM_MMFCREATE].hFile =
            CreateFile (MMFiles[wParam-IDM_MMFCREATE].szMMFile,
                        GENERIC_WRITE | GENERIC_READ,
                        FILE_SHARE_WRITE,
                        NULL,
                        CREATE_ALWAYS,
                        FILE_ATTRIBUTE_TEMPORARY,
                        NULL)) != (HANDLE)INVALID_HANDLE_VALUE)
        goto MAP_FILE;
    }
    break;

case IDM_MMFCREATEEXIST:
    {
    char     szFilePath[MAX_PATH];
    OFSTRUCT of;

    /* Get existing filename for mapfile. */
    *szFilePath = 0;
    if (!GetFileName (hWnd, szFilePath, "*"))
        break;

    /* If file opened, continue to map file. */
    if ((MMFiles[wParam-IDM_MMFCREATE].hFile =
            (HANDLE)OpenFile (szFilePath, &of, OF_READWRITE)) !=
            (HANDLE)HFILE_ERROR)
        goto MAP_FILE;
    }
    break;

case IDM_MMFCREATE:
    /* Associate shared memory file handle value. */
    MMFiles[wParam-IDM_MMFCREATE].hFile = (HANDLE)0xffffffff;

MAP_FILE:
    /* Create 20MB file mapping. */
    if (!(MMFiles[wParam-IDM_MMFCREATE].hMMFile =
            CreateFileMapping (MMFiles[wParam-IDM_MMFCREATE].hFile,
                               NULL,
                               PAGE_READWRITE,
                               0,
                               0x01400000,
                               NULL)))
        {
        ReportError (hWnd);
        if (MMFiles[wParam-IDM_MMFCREATE].hFile)
            {
            CloseHandle (MMFiles[wParam-IDM_MMFCREATE].hFile);
            MMFiles[wParam-IDM_MMFCREATE].hFile = NULL;
            }
        }
    break;    /* from IDM_MMFCREATE */

In the sample code above, three cases are demonstrated. They represent creating a memory-mapped file by first creating a temporary disk file, creating a memory-mapped file from an existing file, and creating a memory-mapped file out of part of the system pagefile. In case IDM_MMFCREATENEW, a temporary file is created first, before the memory-mapped file. For case IDM_MMFCREATEEXIST, the File Open dialog is used to retrieve a filename, and that file is then opened before the memory-mapped file is created. In the third case, IDM_MMFCREATE, the memory-mapped file is created either using the system pagefile or using one of the standard files created in the two earlier cases.

Notice that the CreateFileMapping function need only be called once for all three different cases. The first parameter to the CreateFileMapping function, hFile, is used to supply the handle to the file that is to be memory-mapped. If the system pagefile is to be used, the value 0xFFFFFFFF must be specified instead. In the above examples, a structure is used to represent both the standard file and memory-mapped file information. The hFile field in the structure MMFiles[wParam-IDM_MMFCREATE] is either 0xFFFFFFFF (its default value) or the value of the file handle retrieved in either of the earlier cases.

In all three cases, the memory-mapped file is specified to be 20 MB (0x01400000) in size, regardless of the size of any files created or opened for mapping. The fourth and fifth parameters, dwMaximumSizeHigh and dwMaximumSizeLow, are used to indicate the size of the file mapping. If these parameters indicate a specific size for the memory-mapped file when memory mapping a file other than the pagefile, the file on disk is fitted to this new size—whether larger or smaller makes no difference. As an alternative, when memory mapping a file on disk, you can set the size parameters to 0. In this case, the memory-mapped file will be the same size as the original disk file. When mapping a section of the pagefile, you must specify the size of the memory-mapped file.

The second parameter to the CreateFileMapping function, lpsa, is used to supply a pointer to a SECURITY_ATTRIBUTES structure. Since a memory-mapped file is an object, it can be given the same security attributes as any other object. A NULL value indicates that no security attributes are relevant to your use of the memory-mapped file.

The third parameter, fdwProtect, is used to indicate the type of protection to place on the entire memory-mapped file. You can use this parameter to protect the memory-mapped file from writes by specifying PAGE_READONLY or to permit read and write access with PAGE_READWRITE.

One other parameter of interest is the lpszMapName parameter, which can be used to give the MMF object a name. In order to open a handle to an existing file-mapping object, the object must be named. All that is required of the name is a simple string that is not already being used to identify another object in the system.
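If there is any chance the chosen name is already in use, the result can be checked after the call. The sketch below assumes the usual Win32 behavior: when CreateFileMapping is given the name of an existing file-mapping object, it returns a handle to that object and GetLastError reports ERROR_ALREADY_EXISTS. The name "MyMapName" is illustrative only:

HANDLE hMap;

hMap = CreateFileMapping ((HANDLE)0xFFFFFFFF, NULL, PAGE_READWRITE,
                          0, 0x1000, "MyMapName");
if (hMap && GetLastError () == ERROR_ALREADY_EXISTS)
    {
    /* The name was already in use: hMap refers to the object that
       another process created, not to a brand-new one. */
    }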

Obtaining a File-Mapping Object Handle

In order to map a view of a memory-mapped file, all you need is a valid handle to the MMF object. You can obtain a valid handle in one of several ways: by creating the object as described above, by opening the object with the OpenFileMapping function, by inheriting the object handle, or by duplicating the handle.

Opening a memory-mapped file object

To open a file-mapping object, the object must have been given a name during the creation of the object. A name uniquely identifies the object to this and other processes that wish to share the MMF object. The following portion of code from PORT.C shows how to open a file-mapping object by name.

/* Load name for file-mapping object. */
LoadString (hDLL, IDS_MAPFILENAME, szMapFileName, MAX_PATH);

/* After first process initializes, port data. */
if ((hMMFile = OpenFileMapping (FILE_MAP_WRITE,
                                FALSE,
                                szMapFileName)))
    /* Exit now since initialization was already performed by
       another process. */
    return TRUE;

/* Retrieve path and file for ini file. */
if (!GetIniFile (hDLL, szIniFilePath))
    return FALSE;

/* Test for ini file existence and get length of file. */
if ((int)(hFile = (HANDLE)OpenFile (szIniFilePath,
                                    &of,
                                    OF_READ)) == -1)
    return FALSE;

else
    {
    nFileSize = GetFileSize (hFile, NULL);
    CloseHandle (hFile);
    }

/* Allocate a segment of the swap file for shared memory 2*Size
   of ini file. */
if (!(hMMFile = CreateFileMapping ((HANDLE)0xFFFFFFFF,
                                   NULL,
                                   PAGE_READWRITE,
                                   0,
                                   nFileSize * 2,
                                   szMapFileName)))
    return FALSE;

The OpenFileMapping function requires only three arguments, the most important of these being the name of the object. As shown in the example, the name is simply a unique string. If the string is not unique to the system, the MMF object will not be created. Once the object exists, however, the name is guaranteed for the life of the object.

Also, note in the above example that the MMF object is opened first, possibly before the object has been created. This logic relies on the fact that, if the object does not already exist, the OpenFileMapping function will fail. This is useful in a DLL where the DLL’s initialization code is called repeatedly, once for every process that attaches to it.

The sample from PORT.C above occurs in the DLL’s initialization code that is called every time a DLL gets attached to another process. The first time it is called, the OpenFileMapping function fails because the object does not already exist. The logic, then, continues execution until it reaches the CreateFileMapping function, and it is there that the object is first created. Immediately after initially creating the object, the PortTool code initializes the data in the file mapping by writing porting-specific information to the memory-mapped file. To do this, the memory-mapped file is created with PAGE_READWRITE protection. All subsequent calls to the DLL’s initialization function result in the OpenFileMapping function successfully returning a valid object handle. This way the DLL does not need to keep track of which process is the first to attach to the DLL.

Note that for every process that attaches to the DLL, the object name is retrieved from the same source—a string from the DLL’s resource string table. Since the DLL is able to retrieve the object name from its own resource string table, the name is global to all processes, yet no process is actually aware of the name used. The DLL is able to effectively encapsulate this functionality while at the same time providing the benefit of shared memory to each process that attaches to the DLL.

The PortTool example presents a useful context for sharing memory. Yet, keep in mind that any file on disk could have been used in the same way. If an application were to implement some database services to several other applications, it could set up memory-mapped files using basic disk files, instead of the pagefile, and share that information in the same way. And as the first code listing illustrates, a temporary file could be used to share data instead of the pagefile.

Inheriting and duplicating memory-mapped file object handles

Ordinarily, for two processes to share a memory-mapped file, they must both be able to identify it by name. An exception to this is child processes, which can inherit their parent’s handles. Most objects in Windows can be explicitly targeted for inheritance or not. (Some objects are not inheritable, such as GDI object handles.) When creating an MMF object, a Boolean field in the optional SECURITY_ATTRIBUTES structure can be used to designate whether the handle is to be inheritable or not. If the MMF object handle is designated as inheritable, any child processes of the process that created the object can access the object through the same handle as their parent.

Literally, this means the child process can access the object by supplying the same handle value as the parent. Communicating that handle to the child process is another concern. The child process is still another process after all, having its own address space, so the handle variable itself is not transferable. Either some interprocess communication (IPC) mechanism or the command line can be used to communicate handle values to child processes.

Further, the DuplicateHandle function is provided to offer more control over when handles can be inherited. This function can be used to create a duplicate of the original handle and to set the inheritance state of that duplicate. An application can invoke this function to produce an inheritable MMF object handle before passing it along to a child process, or it can do the opposite: it can take an inheritable handle and prevent it from being inherited.
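A rough sketch of that pattern appears below. It is hypothetical: the child program name child.exe is made up, error handling is omitted, and the child is assumed to parse the decimal handle value from its command line and cast it back to a HANDLE before calling MapViewOfFile:

HANDLE              hInheritable;
char                szCmdLine[64];
STARTUPINFO         si = { sizeof (si) };
PROCESS_INFORMATION pi;

/* Make an inheritable duplicate of the MMF object handle. */
DuplicateHandle (GetCurrentProcess (), hMMFile,
                 GetCurrentProcess (), &hInheritable,
                 0,
                 TRUE,                      /* the duplicate is inheritable */
                 DUPLICATE_SAME_ACCESS);

/* Pass the handle value on the child's command line; bInheritHandles
   must be TRUE for the child to receive the handle. */
wsprintf (szCmdLine, "child.exe %lu", (DWORD)hInheritable);
CreateProcess (NULL, szCmdLine, NULL, NULL,
               TRUE,                        /* inherit handles */
               0, NULL, NULL, &si, &pi);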

Viewing Part of a Memory-Mapped File

Once obtained, the handle to the memory-mapped file object is used to map views of the file to your process’s address space. Views can be mapped and unmapped at will while the MMF object exists. When a view of the file is mapped, system resources are finally allocated. A contiguous range of addresses, large enough to span the size of the file view, is now committed in your process’s address space. Yet, even though the addresses have been committed for the file view, physical pages of memory are still committed only on a demand basis when using the memory. So, the only way to allocate a page of physical memory for a committed page of addresses in your memory-mapped file view is to generate a page fault for that page. This is done automatically the first time you read or write to any address in the page of memory.

To map a view of a memory-mapped file, use either the MapViewOfFile or the MapViewOfFileEx function. With both of these functions, a handle to a memory-mapped file object is a required parameter. The following example shows how the PortTool sample application implements this function.

/* Map a view of this file for writing. */
lpMMFile = (char *)MapViewOfFile (hMMFile,
                                  FILE_MAP_WRITE,
                                  0,
                                  0,
                                  0);

In this example, the entire file is mapped, so the final three parameters are less meaningful. The first parameter specifies the file-mapping object handle. The second parameter indicates the access mode for the view of the file. This can be FILE_MAP_READ, FILE_MAP_WRITE, or FILE_MAP_ALL_ACCESS, provided the protection on the file-mapping object permits it. If the object is created with PAGE_READWRITE protection, all of these access types are available. If, on the other hand, the file is created with PAGE_READONLY protection, the only access type available is FILE_MAP_READ. This allows the object creator control over how the object can be viewed.

The third and fourth parameters are used to indicate the high and low halves, respectively, of a 64-bit offset into the memory-mapped file. This offset from the start of the memory-mapped file is where the view is to begin. The final parameter indicates how much of the file is to be viewed. This parameter can be set to 0, in which case the view extends from the specified offset to the end of the file mapping.

The function returns a pointer to the location in the process’s address space where the file view has been mapped. This is an arbitrary location in your process, depending on where a contiguous range of addresses is available. If you want to map the file view to a specific set of addresses in your process, the MapViewOfFileEx function provides this capability. This function simply adds an additional parameter, lpvBase, to indicate the location in your process at which to map the view. The return value of MapViewOfFileEx is the same value as lpvBase if the function is successful; otherwise, it is NULL. Similarly, for MapViewOfFile the return value is NULL if the function fails.
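A brief sketch of requesting a specific base address follows; the address 0x30000000 is an arbitrary illustration, and the code falls back to letting the system choose if that range is unavailable:

/* Ask for the view at a particular address; fall back if it is taken. */
LPVOID lpRequested = (LPVOID)0x30000000;    /* arbitrary example address */
char   *lpView;

lpView = (char *)MapViewOfFileEx (hMMFile, FILE_MAP_WRITE,
                                  0, 0, 0,
                                  lpRequested);
if (!lpView)
    /* The requested range was not free; let the system pick one. */
    lpView = (char *)MapViewOfFile (hMMFile, FILE_MAP_WRITE, 0, 0, 0);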

Multiple views of the same file-mapping object can coexist and overlap each other as shown in Figure 5.

Figure 5. Memory-mapped file objects permit multiple, overlapped views of the file from one or more processes at the same time.

Notice that multiple views of a memory-mapped file can overlap, regardless of what process maps them. In a single process with overlapping views, you simply end up with two or more virtual addresses in a process that refer to the same location in physical memory. So, it’s possible to have several PTEs referencing the same page frame. Remember, each page of a shared memory-mapped file is represented by only one physical page of memory. To view that page of memory, a process needs a page directory entry and page-table entry to reference the page frame.

There are two ways in which needing only one physical page of memory for a shared page benefits applications in the system. First, there is an obvious savings of resources because both processes share both the physical page of memory and the page of hard disk storage used to back the memory-mapped file. Second, there is only one set of data, so all views are always coherent with one another. This means that changes made to a page in the memory-mapped file via one process’s view are automatically reflected in a common view of the memory-mapped file in another process. Essentially, Windows NT is not required to do any special bookkeeping to ensure the integrity of data to both applications.

Unmapping a View of a Memory-Mapped File

Once a view of the memory-mapped file has been mapped, the view can be unmapped at any time by calling the UnmapViewOfFile function. As you can see below, there is nothing tricky about this function. Simply supply the one parameter that indicates the base address where the view of the file begins in your process.

/* Load tokens for APIS section. */
LoadString (hDLL, IDS_PORTAPIS, szSection, MAX_PATH);
if (!LoadSection (szIniFilePath,
                  szSection,
                  PT_APIS,
                  &nOffset,
                  lpMMFile))
    {
    /* Clean up memory-mapped file. */
    UnmapViewOfFile (lpMMFile);
    CloseHandle (hMMFile);
    return FALSE;
    }

As mentioned above, you can have multiple views of the same memory-mapped file, and they can overlap. But what about mapping two identical views of the same memory-mapped file? After learning how to unmap a view of a file, you could come to the conclusion that it would not be possible to have two identical views in a single process because their base address would be the same, and you wouldn’t be able to distinguish between them. This is not true. Remember that the base address returned by either the MapViewOfFile or the MapViewOfFileEx function is not the base address of the file view. Rather, it is the base address in your process where the view begins. So mapping two identical views of the same memory-mapped file will produce two views having different base addresses, but nonetheless identical views of the same portion of the memory-mapped file.

The point of this little exercise is to emphasize that every view of a single memory-mapped file object is always mapped to a unique range of addresses in the process. The base address will be different for each view. For that reason the base address of a mapped view is all that is required to unmap the view.
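A short hypothetical sketch makes the point, and also illustrates the coherence of views discussed earlier; error handling is omitted:

char *lpView1, *lpView2;

/* Two identical views of the same object, two different base addresses. */
lpView1 = (char *)MapViewOfFile (hMMFile, FILE_MAP_WRITE, 0, 0, 0);
lpView2 = (char *)MapViewOfFile (hMMFile, FILE_MAP_WRITE, 0, 0, 0);

lpView1[0] = 'A';
/* lpView2[0] now reads 'A' as well: both views address the same pages. */

/* Each view is unmapped by its own base address. */
UnmapViewOfFile (lpView2);
UnmapViewOfFile (lpView1);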

Flushing Views of Files

An important feature for memory-mapped files is the ability to write any changes to disk immediately if necessary. This feature is provided through the FlushViewOfFile function. Changes made to a memory-mapped file through a view of the file, other than the system pagefile, are automatically written to disk when the view is unmapped or when the file-mapping object is deleted. Yet, if an application needs to force the changes to be written immediately, FlushViewOfFile can be used for that purpose.

/* Force changes to disk immediately. */
FlushViewOfFile (lpMMFile, nMMFileSize);

The example listed above flushes an entire file view to disk. In doing so, the system only writes the dirty pages to disk. Since the Windows NT virtual-memory manager automatically tracks changes made to pages, it is a simple matter for it to enumerate all dirty pages in a range of addresses, writing them to disk. The range of addresses is formed by taking the base address of the file view supplied by the first parameter to the FlushViewOfFile function as the starting point and extending to the size supplied by the second parameter, cbFlush. The only requirement is that the range be within the bounds of a single file view.

Releasing a Memory-Mapped File

Like most other objects, a memory-mapped file object is closed by calling the CloseHandle function. It is not necessary to unmap all views of the memory-mapped file before closing the object. As mentioned above, dirty pages are written to disk before the object is freed. To close a memory-mapped file, call the CloseHandle function, supplying the memory-mapped file object handle as its only parameter.

/* Close memory-mapped file. */
CloseHandle (hMMFile);

It is worth noting that closing a memory-mapped file does nothing more than free the object. If the memory-mapped file represents a file on disk, the file must still be closed using standard file I/O functions. Also, if you create a temporary file explicitly for use as a memory-mapped file as in the initial ProcessWalker example, you are responsible for removing the temporary file yourself. To illustrate what the entire cleanup process may look like, consider the following example from the ProcessWalker sample application.

case IDM_MMFFREE:
case IDM_MMFFREENEW:
case IDM_MMFFREEEXIST:
    {
    HCURSOR  hOldCursor;
    OFSTRUCT of;

    /* Put hourglass cursor up. */
    hOldCursor = (HCURSOR)SetClassLong (hWnd, GCL_HCURSOR, 0);
    SetCursor (LoadCursor (0, IDC_WAIT));

    /* Release memory-mapped file and associated file if any. */
    CloseHandle (MMFiles[wParam-IDM_MMFFREE].hMMFile);
    MMFiles[wParam-IDM_MMFFREE].hMMFile = NULL;

    if (MMFiles[wParam-IDM_MMFFREE].hFile)
        {
        CloseHandle (MMFiles[wParam-IDM_MMFFREE].hFile);
        MMFiles[wParam-IDM_MMFFREE].hFile = NULL;
        }

    /* If temporary file, delete here. */
    if (wParam == IDM_MMFFREENEW)
        {
        OpenFile (MMFiles[wParam-IDM_MMFFREE].szMMFile,
                  &of,
                  OF_DELETE);
        *(MMFiles[wParam-IDM_MMFFREE].szMMFile) = 0;
        }

    /* Replace wait cursor with old cursor. */
    SetClassLong (hWnd, GCL_HCURSOR, (LONG)hOldCursor);
    SetCursor (hOldCursor);
    }
    break;

In this example, the memory-mapped file can be one of three types: the system pagefile, a temporary file, or an existing file on disk. If the file is the system pagefile, the memory-mapped file object is simply closed, and no additional cleanup is necessary. If the memory-mapped file is mapped from an existing file, that file is closed right after closing the memory-mapped file. If the memory-mapped file is a mapping of a temporary file, it is no longer needed and is deleted using standard file I/O immediately after closing the temporary file handle, which cannot occur until after closing the memory-mapped file object handle.

Conclusion

Memory-mapped files provide unique methods for managing memory in the Windows application programming interface. They permit an application to map its virtual address space directly to a file on disk. Once a file has been memory-mapped, accessing its content is reduced to dereferencing a pointer.

A memory-mapped file can also be mapped by more than one application simultaneously. This represents the only mechanism for two or more processes to directly share data in Windows NT. With memory-mapped files, processes can map a common file or portion of a file to unique locations in their own address space. This technique preserves the integrity of private address spaces for all processes in Windows NT.

Memory-mapped files are also useful for manipulating large files. Since creating a file-mapping object consumes few physical resources, extremely large files can be opened by a process with little impact on the system. Then, smaller portions of the file, called “views,” can be mapped into the process’s address space just before performing I/O.

There are many techniques for managing memory in applications for Windows. Whether you need the benefits of memory sharing or simply wish to manage virtual memory backed by a file on disk, memory-mapped file functions offer the support you need.

