Traditionally, the performance characteristics of machines have been determined by comparing the run-time behavior of well-known applications. The alternative approach taken here is to create artificial kernel applications whose sole task is to determine one particular machine parameter in isolation. Findings obtained with this method may at first glance appear less practical than those of the former approach, but they are in fact more fundamental and therefore more predictive. In this paper, memory delay times are investigated to determine the performance degradation caused by accesses to far-off memory sections. The elementary model chosen here assumes that the entire memory is divided into sections with different access levels, depending on the respective processor's location. The properties of the different levels can be described sufficiently by two parameters: the size of the corresponding memory section, and the time that elapses before data is loaded from that section. This model makes it possible to work with expected values of the access time as a function of the size of the accessed area. A special program determines a finite set of such expected values by Monte-Carlo integration of hardware-produced probability density functions. The same set of values can also be derived theoretically at the same abscissae from hardware topology information, leaving the level-specific delay times as free parameters. To obtain these free parameters, the theoretical set is fitted to the experimental one. Theory and measurement yield impressive results: agreement and reproducibility are excellent, at least for the Origin2000.
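The level model described above can be sketched in a few lines of code. The sketch below is illustrative only: the level sizes, delay values, and function names are assumptions made for this example, not measured Origin2000 parameters or the paper's actual implementation. It computes the expected access time for a uniformly random access within a region of a given size, both analytically and by Monte-Carlo estimation, mirroring the two sets of values that the paper fits against each other.

```python
import random

# Hypothetical two-level hierarchy (illustrative values, not measured ones):
# sizes in bytes and per-access delays in arbitrary time units.
LEVEL_SIZES  = [32 * 1024, 4 * 1024 * 1024]   # e.g. near memory, far memory
LEVEL_DELAYS = [5.0, 120.0]

def expected_delay(region_size):
    """Analytic expectation: a uniformly random access within the first
    `region_size` bytes hits each level in proportion to the part of that
    level the region covers."""
    total, remaining = 0.0, region_size
    for size, delay in zip(LEVEL_SIZES, LEVEL_DELAYS):
        covered = min(size, remaining)
        total += covered * delay
        remaining -= covered
        if remaining <= 0:
            break
    return total / region_size

def monte_carlo_delay(region_size, samples=100_000, seed=0):
    """Monte-Carlo estimate of the same expectation: draw random addresses
    and look up which level each one falls into."""
    rng = random.Random(seed)
    bounds, acc = [], 0
    for size in LEVEL_SIZES:
        acc += size
        bounds.append(acc)
    total = 0.0
    for _ in range(samples):
        addr = rng.uniform(0, region_size)
        for bound, delay in zip(bounds, LEVEL_DELAYS):
            if addr < bound:
                total += delay
                break
    return total / samples

# A 64 KiB region covers the 32 KiB first level entirely and 32 KiB of the
# second, so the analytic expectation is (5 + 120) / 2 = 62.5.
print(expected_delay(64 * 1024))      # 62.5
print(monte_carlo_delay(64 * 1024))   # close to 62.5
```

In the paper's setting, the Monte-Carlo side comes from hardware measurements, while the analytic side keeps the per-level delays as free parameters; fitting the analytic curve to the measured expected values (e.g. by least squares over several region sizes) then recovers those delays.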