Running DOS programs in high memory
How did retro computers achieve this? Is it substantially different from the modern approach, and if so, how?

The method depends on whether you have the address space or not, regardless of the RAM limitation. If you already have a 32-bit address space, but simply not much RAM, then the answer is virtual memory.
However, given the statement of having 2^16 words of RAM, we will assume we have only a 16-bit address space. Here, the relevant concepts are overlays and overlay linkers. These will allow you to keep your resident code to a minimum. A good overlay linker would even duplicate small functions in the call tree into different overlay segments so that it would not have to switch overlays.
Such mechanisms preclude function pointer comparison, but that was not really an issue with the languages back then. Fundamentally, overlays keep a common base portion of code, and swap out the remainder as needed.
Overlay calls must use some additional state, usually another word on the stack next to the return address, for holding the caller's overlay; the mechanism may have to perform a segment swap on return as well as on call.
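A minimal sketch of such an overlay call thunk, with invented names (not any particular linker's scheme); the saved caller overlay number is the extra word of state mentioned above:

```c
static int current_overlay = -1;          /* overlay occupying the area now */

static void overlay_ensure_loaded(int n)  /* hypothetical disk loader */
{
    if (current_overlay != n) {
        /* ...read overlay image n from disk into the fixed area... */
        current_overlay = n;
    }
}

/* Call fn, which lives in overlay n; restore the caller's overlay on
 * return -- the "segment swap on return as well as on call". */
static int overlay_call(int n, int (*fn)(void))
{
    int caller = current_overlay;         /* saved next to the return address */
    int result;

    overlay_ensure_loaded(n);             /* swap on call */
    result = fn();
    if (caller >= 0)
        overlay_ensure_loaded(caller);    /* swap on return */
    return result;
}
```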
However, overlays will by no means get you to 2^32 sizes; they merely condense the code section so you have some more room for data within your existing 16-bit address space. The system in question used microcode to offer extended addressing through 32-bit pointers: it would map 32-bit addresses onto the last of several pages in the 16-bit address space. So, the 16-bit address space would look something like this:

- code: as much as needed by the maximum size of the compressed overlays
- data: for global fixed data
- mapping pages: a few 1k pages at the top of the address space, used as windows

The 2 pages per window ensured that after a mapping at least 1k was accessible. After issuing the mapping instructions, you could write code that would transfer or compute (all using 16-bit addressing) between regular data and the mapped windows. So, at the cost of 4 pages (4k) of 16-bit address space, you could access 2^32 bytes using the mappings. Oh, and also the cost of inserting a mapping instruction before a 16-bit load or store, and further of maintaining 32-bit pointers; but you would have those sizes anyway with any other scheme supporting larger addressing.
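A rough sketch of the mapping idea in C, with invented names (the real mechanism was microcode, and the backing array here merely stands in for the large memory):

```c
#include <stdint.h>

#define PAGE_SIZE 1024u                   /* 1k pages, as in the text */

static uint8_t backing[1u << 20];         /* stand-in for the big memory */
static uint8_t *window;                   /* the mapped pages at the top */

/* The "mapping instruction": make addr32 reachable, return a short pointer.
 * Mapping two consecutive pages guarantees at least 1k is accessible. */
static uint8_t *map32(uint32_t addr32)
{
    uint32_t page_base = addr32 & ~(PAGE_SIZE - 1);
    window = &backing[page_base];         /* microcode would remap here */
    return window + (addr32 - page_base);
}

/* Every access through a 32-bit pointer pays one mapping operation. */
static uint8_t load32(uint32_t a)             { return *map32(a); }
static void    store32(uint32_t a, uint8_t v) { *map32(a) = v;    }
```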
I don't recall exactly how virtual memory applied, but I believe it would have been included along with the mappings, to access even more memory than both the address space and real memory allowed. Typically, if your program needs 2^32 bytes, most of that will be data. Other computer systems have also had extended pointers, e.g. the 8086 family.
These used segment registers to complement 16-bit pointers, and you could use them in two different ways. The first was to hold the segment registers fixed, with one pointing to code, one to data, one to heap, and one to stack. You could then have 64k of code and maybe 64k for the rest; or maybe 64k each of global data, heap, and stack, but the latter compromised the runtime model, since the language would have to simply know a priori which segment a 16-bit pointer referred to (easy with code vs. data, harder among data, heap, and stack). The second was to reload a segment register for each far access, i.e. full segment:offset far pointers, at some runtime cost.
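For concreteness, the 8086 formed a 20-bit physical address from a 16-bit segment and a 16-bit offset, so two 16-bit registers together reach 1 MB:

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t phys(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;    /* segment * 16 + offset */
}

int main(void)
{
    /* B800:0000 is the classic color text-mode video buffer. */
    printf("B800:0000 -> 0x%05lX\n", (unsigned long)phys(0xB800, 0x0000));
    return 0;                             /* prints 0xB8000 */
}
```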
Usually you could get programs of 128k or so out of the fixed-segment arrangement, sometimes a bit more.

Uiuiui - according to the rules this question should be out of scope as too broad, but let's give it a try. The very first computers didn't even bother to distinguish much between 'slow' and fast memory, as the 'slow' device was all there was. In fact, many early machines didn't have directly accessible external storage at all. The first thing that got scarce wasn't program space but data RAM. Keep in mind, a punch card is already 80 bytes, and a mere hundred of them kept in memory is about 8 KiB.
Quite a lot for early computers, resulting in a need to handle them in some fast way without enough memory. Data too large to be handled at once would be moved in and out under program control.
Either as records or blocks. Ofc, even back then there were tasks too complicated to fit into available memory. There are basically two methods to handle this:

- Chaining
- Overlays

Both can be used for data or programs, although chaining is usually more associated with programs. Chaining is the most simple way of running a task too large to fit into memory at once.
It gets split up into a series of somewhat independent programs that get run one after another. A batch file is essentially the most simple way of chaining. Early on, mainframe OSes did support more sophisticated ways of chaining where successive parts got loaded into the same memory while keeping everything but the newly loaded section intact, thus able to share data.
Unix's idea of piping streams through programs is somewhat of an in-between here. As with mainframes before, early micros also supported chaining beyond a batch. Applesoft was missing that feature, but programmers soon developed some replacement code.
Unix-like environments usually offer some kind of exec function to implement chaining (a minimal sketch follows below). While often called an overlay mechanism, it's strictly just chaining, as the whole program gets replaced. Overlays, on the other hand, are a way where a program keeps running but exchanges parts of its code or data on an as-needed basis.
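The promised sketch of chaining on a Unix-like system, with invented file names; note that the whole image is replaced, so nothing survives except what was handed over explicitly (here, a command-line argument):

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char *args[] = { "stage2", "intermediate.dat", (char *)0 };

    execv("./stage2", args);   /* on success this call never returns */
    perror("execv");           /* reached only if the chain failed */
    return 1;
}
```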
The exchanged parts can be large portions, like a word processor loading a spell checker or mail merge overlay, as MicroPro's WordStar did. Or smaller parts, like a single function. Ofc, loading each function separately when needed may perform poorly compared to loading bundles of functions.
Overlays are mostly a programming issue, and support is provided by the programming environment in use. A common feature of all overlay techniques is that there is a predefined memory area where the overlay gets loaded; there can be more than one overlay area (a minimal loader is sketched below). Overlays can be used with data as well.
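A minimal sketch of a fixed overlay area: every overlay is read into the same predefined buffer, evicting whatever was there before. File names and the area size are invented for the example:

```c
#include <stdio.h>

#define OVERLAY_AREA_SIZE 16384u

static unsigned char overlay_area[OVERLAY_AREA_SIZE]; /* the predefined area */

/* Returns bytes loaded, or -1 on failure. The previous overlay's code
 * or data in the area is simply overwritten. */
static long load_overlay(const char *filename)
{
    FILE *f = fopen(filename, "rb");
    size_t n;

    if (f == NULL)
        return -1;
    n = fread(overlay_area, 1, OVERLAY_AREA_SIZE, f);
    fclose(f);
    return (long)n;
}
```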
Data overlays may need to be copied back to the extended storage before the overlay area can be reused. Dynamic linking can be seen as a special way to handle overlays, especially when an unlink feature is offered. Unlike simple overlays, memory can be used less wastefully, with tighter packing.
Also, the linker may offer recursive linking to handle dependencies. Where overlays usually need to be placed at a specific address when copied into RAM, dynamically linked modules can occupy any address - and with unlink available, a different address each time they are used, even within a single program run.
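For a concrete flavor, here is a minimal sketch using POSIX dlopen()/dlsym()/dlclose() as the "link" and "unlink" operations; module and symbol names are invented, and you typically build with -ldl:

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Bring the module in; the loader may pick a different address
     * on every load, unlike a fixed overlay area. */
    void *mod = dlopen("./spellcheck.so", RTLD_NOW);
    if (mod == NULL) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    int (*check_word)(const char *);
    check_word = (int (*)(const char *))dlsym(mod, "check_word");
    if (check_word != NULL)
        printf("check_word(\"retro\") = %d\n", check_word("retro"));

    dlclose(mod);   /* the "unlink": the memory can be reused */
    return 0;
}
```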
In a more generalized way, there are two techniques to handle the management of extended storage: swapping and paging. For swapping, program and data areas of an application are organized in segments, with each segment being a unit to be moved out to extended storage or back in.
They can be of varying size. While most OSes with swapping support only use these segments to organize a program (code, data, stack, heap) and copy them in or out all at once, swapping can also be used as a way for programs to organize their overlays while handing the management over to some OS functionality. Unlike simple overlays, segments can end up at a different address after being copied out and in again. Paging, in contrast, splits up the memory, or a part thereof, into equal-sized pages.
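A small sketch of the page split (a 4 KiB page size is assumed here for illustration; real systems of the era varied):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u                         /* 4096-byte pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

int main(void)
{
    uint32_t vaddr  = 0x0001A2B4u;
    uint32_t page   = vaddr >> PAGE_SHIFT;     /* page number: 0x1A  */
    uint32_t offset = vaddr & PAGE_MASK;       /* offset:      0x2B4 */

    /* A page table then maps the page number to wherever that page
     * currently lives: RAM or extended storage. */
    printf("page 0x%X, offset 0x%X\n", (unsigned)page, (unsigned)offset);
    return 0;
}
```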
Many programs are restricted to the first 640K, known as conventional memory, and are unable to address any other kind of memory. All computers have this conventional memory. All computers also have the ability to address memory from 640K to 1024K, but that area isn't available for user memory. It's needed for the use of the system--the video board requires some memory, and that memory uses the addresses from 640K through 768K.
Also, the system requires space for Read-Only Memory (ROM) chips that contain important software on add-in expansion boards; that memory goes in the space between 768K and 1024K. The area from 640K through 1024K is called the upper memory area (UMA) by Microsoft nowadays, but you'll also hear it referred to by its older name, the reserved area.
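Laid out as a map, the real-mode address space typically looks like this (typical values; details vary by machine and installed adapters):

```
   0K -  640K   conventional memory (DOS and user programs)
 640K -  768K   video memory
 768K -  960K   adapter ROMs, adapter RAM, possible EMS page frame
 960K - 1024K   system BIOS ROM
```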
Expanded memory became popular because Lotus 1-2-3 version 2.x could use it. Any kind of PC can use expanded memory, so long as it has a memory board to support it or, as we'll see, a memory manager program. Expanded memory uses up 64K of memory addresses in the part of the reserved area between 640K and 1024K for an EMS page frame.
The page frame can be anywhere in the 640K-1024K range, but it's most commonly at 832K (segment D000). Meanwhile, 286 and later processor chips support memory beyond the 1024K address: 16 megabytes (MB) on a 286, 4,096 MB on a 386. That's why it gets a different name--extended memory. Again, few DOS programs can use extended memory. Since a 286 is a much less powerful chip than a 386, a 286 memory manager is somewhat limited in what it can do.
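As a sketch of how a DOS program banked expanded memory through the page frame (Borland-style dos.h and int86(), so this won't build with a modern compiler; function numbers are from the LIM EMS specification, and error checking of AH after each call is omitted for brevity):

```c
#include <dos.h>
#include <stdio.h>

int main(void)
{
    union REGS r;
    unsigned frame_seg, handle;

    r.h.ah = 0x41;                    /* get page frame segment */
    int86(0x67, &r, &r);
    frame_seg = r.x.bx;

    r.h.ah = 0x43;                    /* allocate 4 logical pages = 64 KB */
    r.x.bx = 4;
    int86(0x67, &r, &r);
    handle = r.x.dx;

    r.h.ah = 0x44;                    /* map logical page 0 into...     */
    r.h.al = 0;                       /* ...physical page 0 of the frame */
    r.x.bx = 0;
    r.x.dx = handle;
    int86(0x67, &r, &r);

    printf("EMS page frame at segment %04X\n", frame_seg);

    r.h.ah = 0x45;                    /* release the handle's pages */
    r.x.dx = handle;
    int86(0x67, &r, &r);
    return 0;
}
```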
By default, device drivers loaded in config.sys are placed in conventional memory. The upper memory area (UMA) is memory in the range between 640 KB and 1 MB. By default there is no RAM in this range, as it is reserved for use by hardware that is able to map its own memory into this range.
Additional hardware like mass storage controllers or network adapters can map memory there as well. To place a driver into upper memory instead, it has to be loaded in config.sys with DEVICEHIGH. If no contiguous free Upper Memory Block is available, the driver will be loaded to conventional memory. Since UMA memory is managed in blocks, the amount of free upper memory is usually larger than the largest contiguous free block.

Extended memory is made available by an XMS driver such as HIMEM.SYS in config.sys; a UMB/EMS provider is loaded directly after the XMS driver in config.sys. Expanded memory can be either memory on a memory expansion card or a part of the main memory. The specification describes that this memory is used by mapping a 64 kB window into the upper memory area between 640 kB and 1 MB. The latest EMS version is 4.0. On 386 and later systems, expanded memory can be emulated with EMM386.EXE, which has to be loaded in config.sys.
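Putting the pieces together, a typical MS-DOS 5+ config.sys providing XMS, EMS, and UMB loading might look like this (paths are examples):

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\ANSI.SYS
```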
Sometimes it will do worse. I knew a lot of people who used it to optimize DOS memory. It probably depends on your hardware; you can always try both to see which one runs your favorite games better. It also makes less of a difference today, since we can throw more hardware at the problem more easily than we could in the early 90s.
And so on.