For ATDI, prediction models are a means of saving its customers time, money and resources.
ATDI has been involved in numerous mobile and fixed network roll-outs; right now, ATDI engineers are acting as consultants to deployment teams on three national networks in northern Europe, and in around six other cases they are helping to develop deployment business plans. Each of these jobs relies on prediction models, and the accuracy of the overall modelling depends hugely on the accuracy of those models. It is critical that customer and supplier share an understanding of that accuracy in order to make the most of what the body of knowledge in prediction has to offer.
A model predicts the signal level arriving at a network subscriber. From that it predicts the traffic in the network, which in turn allows the intra-network interference to be calculated for a given available spectrum. Together these effects determine the service the subscriber will receive. But a prediction uses a model of reality that is not, by definition, reality. To improve confidence in the prediction, a confidence margin can be added; the larger the model error, the bigger the margin, and the bigger the margin, the more sites are needed to be certain of meeting requirements. It is therefore essential that everyone understands the model selected for the job.
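The link between model error and margin can be made concrete. A minimal sketch, assuming the prediction error is Gaussian in decibels (the usual lognormal-shadowing assumption; the function name and figures are illustrative, not from any ATDI tool):

```python
from statistics import NormalDist

def coverage_margin_db(sigma_db: float, coverage_prob: float) -> float:
    """Fade margin (dB) to add on top of the median prediction so the
    received signal meets its threshold with the given probability,
    assuming Gaussian prediction error in dB."""
    z = NormalDist().inv_cdf(coverage_prob)  # standard normal quantile
    return z * sigma_db

# A model with a 5 dB standard deviation of error needs roughly an
# 8.2 dB margin for 95% location probability; halve the error and
# the margin halves too, directly reducing the site count.
margin = coverage_margin_db(5.0, 0.95)
```

The point of the sketch is the linearity: every decibel shaved off the model's error spread comes straight off the margin, and hence off the number of sites.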
Prediction models comprise three parts: an environment model, a terminals model and a propagation algorithm. Each contributes to the overall error in prediction. Models fall into two types: empirical and physical. Empirical models rely on drive testing, using copious measurements to calibrate an otherwise wildly inaccurate polynomial for the environment and terminals in use. Physical models describe how diffusion, refraction, reflection, diffraction and interference act on the wave launched from the base station as it travels to the subscriber. Physical models hold their calibration over a much wider range, though it is always prudent to re-calibrate when operating at the edge of their applicability. While this suggests two distinct camps, in reality most models are a mix of both. So how do the types perform?
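To see what such an empirical polynomial looks like, here is a sketch of the standard published COST 231 Hata median path loss (not ATDI's 'Open' variant, whose additional exposed parameters are not reproduced here):

```python
import math

def cost231_hata_db(f_mhz, d_km, h_base_m, h_mobile_m, metro=False):
    """Median path loss (dB) from the standard COST 231 Hata model.
    The published fit is only valid roughly for 1500-2000 MHz, base
    antenna height 30-200 m, mobile height 1-10 m and 1-20 km range;
    outside these limits the empirical calibration does not hold."""
    # Small/medium-city mobile-antenna height correction term a(hm):
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
         - (1.56 * math.log10(f_mhz) - 0.8)
    c = 3.0 if metro else 0.0  # metropolitan-centre offset
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km)
            + c)
```

Note that the terrain enters only through a handful of fitted constants; that is precisely why such a model is "wildly inaccurate" outside the environment it was calibrated for, and why the physical camp models the propagation mechanisms explicitly instead.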
The recent network roll-outs suggest that there is little to choose between suitably selected models from either camp. The standard development of the Hata model into the COST 231 Hata algorithm can prove too inaccurate on its own. Opening up the COST model to expose all of its parameters and calibrating across a comprehensive range of variables yields the ATDI COST 231 'Open' model. Its performance matches the well-established ITU model that combines the Deygout diffraction method with an empirical correction for excess losses due to a cluttered subscriber antenna. Typically, an average error of better than 1 dB and a standard deviation of error of better than 5 dB at UHF can be achieved with both. This sweeping statement is offered as an indication rather than a definition: model calibration starts with data collection, which is a science of its own and one still surrounded by myth.
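The two figures quoted, average error and standard deviation of error, come directly from comparing drive-test measurements with predictions. A minimal sketch of that comparison, with hypothetical sample values:

```python
from statistics import mean, stdev

def calibration_stats(measured_dbm, predicted_dbm):
    """Mean error and standard deviation of error (dB) between
    drive-test measurements and model predictions at the same points.
    The mean can be folded back into the model as a constant offset;
    the standard deviation is the residual spread to margin against."""
    errors = [m - p for m, p in zip(measured_dbm, predicted_dbm)]
    return mean(errors), stdev(errors)

# Hypothetical drive-test samples (dBm) at five measurement points:
measured  = [-72.1, -80.4, -95.0, -88.3, -76.5]
predicted = [-70.0, -83.0, -92.5, -90.0, -75.0]
bias, sigma = calibration_stats(measured, predicted)
# Subtracting 'bias' from future predictions centres the model;
# 'sigma' is what sets the confidence margin mentioned earlier.
```

In practice the hard part is not this arithmetic but the data collection feeding it: receiver calibration, antenna characterisation and averaging over fast fading all sit upstream of these two numbers.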
So what's the conclusion? When selecting a prediction model, it is critically important to understand the application to which it will be put and to select against the published application limits of the model. The model must be calibrated; for empirical models this needs significant work. And the model's performance must be kept under review as roll-out progresses; reflection modelling or pseudo-ray tracing can be added in mountainous terrain, but it carries a computational overhead and should be used only when needed. Above all, both customer and supplier need to understand the prediction models on offer and select according to the needs of the job in hand, rather than simply reusing the model from the last deployment.