There is a software design technique where you make all of your resource allocations at the start of program execution. This is useful for embedded systems, OS kernels, and software in general. I don't know that this technique is taught (or learned) any more.
The basic idea is just as I described. When the program starts, it makes all of the resource allocations (memory, files, etc.) that it is ever going to make. Memory is allocated and prepared; it won't be free()'d until program exit. The files you're going to use are opened and held, as opposed to being opened and closed as the program runs. In reality, some things like client sockets cannot be pre-allocated on some OSes. But the general principle holds.
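To make the idea concrete, here is a minimal C sketch of that start-up pattern. The buffer size, file name, and helper names are my own assumptions for illustration, not anything taken from a particular program.

#include <stdio.h>
#include <stdlib.h>

/* All long-lived resources are acquired here, once, at startup. */
#define WORK_BUFFER_SIZE (1024 * 1024)   /* example size, an assumption */

static char *work_buffer;   /* held until exit, never free()'d */
static FILE *log_file;      /* opened once, held for the program's lifetime */

static int init_resources(void)
{
    work_buffer = malloc(WORK_BUFFER_SIZE);
    if (work_buffer == NULL)
        return -1;

    log_file = fopen("app.log", "a");    /* example file name, an assumption */
    if (log_file == NULL)
        return -1;

    return 0;
}

int main(void)
{
    /* If anything we need is unavailable, fail now, not mid-run. */
    if (init_resources() != 0) {
        fprintf(stderr, "could not acquire resources at startup\n");
        return EXIT_FAILURE;
    }

    /* ... main loop works only with work_buffer and log_file ... */

    return EXIT_SUCCESS;   /* resources are released by the OS at exit */
}

The key point of the sketch is that after init_resources() returns, the program never asks the system for anything new; it either has everything it needs or it refuses to start.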
This style has lots of benefits. For embedded and server software, it is about long-running processes and consistent behaviour over time. For example, malloc() and free() can fragment memory, leaving the program with plenty of free memory but no single piece big enough to use, and you just don't know when that will happen. Less important is that allocation algorithms are usually slow (or slow down over time). (Failing software is worse than slow software; and, to boot, you can get fast allocation algorithms, as long as you're willing to keep excess resources to spare.)
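As a hedged illustration of the fragmentation point: the sketch below allocates three blocks, frees the middle one, and then asks for something larger than any remaining hole. The sizes are arbitrary, and whether the final request actually fails depends entirely on the allocator in use; the point is only that the total free space can look sufficient while no single contiguous piece is.

#include <stdlib.h>

int main(void)
{
    char *a = malloc(64 * 1024);
    char *b = malloc(64 * 1024);
    char *c = malloc(64 * 1024);

    free(b);   /* leaves a 64 KB hole between a and c */

    /* Total free memory may exceed 128 KB, but if no contiguous
     * region is that large, this request can fail even though the
     * "free" total looks more than adequate. */
    char *d = malloc(128 * 1024);

    free(a);
    free(c);
    free(d);   /* free(NULL) is harmless if the request failed */
    return 0;
}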
Note: obviously you won't get the benefits of this approach if someone cheats and puts a "clever" allocation algorithm on top of these resources.
If you're used to approaches that don't have this pay-once discipline, you're probably asking "How do you get work done?" and "How do you think about design with this kind of approach?"
To help, I found an article that discusses this technique. (See the section 'Static with allocation'.)
Most often I see articles that discuss the technique for specific tasks or functional areas. For example, the article describes a no-copy TCP/IP stack, with a few diagrams showing how the memory use was statically laid out and how that layout relates to performance.
Since I don't know of any good resources for someone to study this concept, here are a couple of names for it that I have encountered.
Static resource allocation. I think the best name for this technique is 'static resource allocation': the resources are fixed and unchanging. However, a Google search shows that this term is mainly used to refer to allocations made by the compiler with code like:
char Buffer[100000];
(This is valid, but only a subset of the overall concept).
Fixed allocation. This term is also used to refer to the technique, usually to contrast it with dynamic allocation algorithms.
Fixed memory pool. This name usually refers to the narrower case of pre-allocating a pool of fixed-size blocks and handing them out from a free list.
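Since a fixed memory pool is concrete enough to show in a few lines, here is a minimal sketch; the block size and count are arbitrary example values, not taken from any particular system. All storage is reserved up front, and allocation and release just move blocks on and off a free list.

#include <stddef.h>

/* Minimal fixed memory pool sketch: BLOCK_SIZE and BLOCK_COUNT are
 * example values.  All storage is reserved up front; pool_alloc and
 * pool_free only relink blocks on a free list, so there is no
 * fragmentation and the cost per operation is constant. */
#define BLOCK_SIZE   256
#define BLOCK_COUNT  128

typedef union block {
    union block *next;            /* valid while the block is free */
    char payload[BLOCK_SIZE];     /* valid while the block is in use */
} block_t;

static block_t pool[BLOCK_COUNT]; /* storage reserved by the compiler/linker */
static block_t *free_list;

void pool_init(void)
{
    /* Chain every block into the free list once, at startup. */
    for (size_t i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                     /* NULL when the pool is exhausted */
}

void pool_free(void *p)
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}

Exhaustion shows up as a NULL return rather than as creeping fragmentation, and every operation costs the same no matter how long the program has been running.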