Only Over-the-Air sync testing fully characterizes network performance.
The rollout of 5G networks worldwide is bringing sharp focus to the need for phase/time synchronization. Poor phase synchronization leads to interference. The specifications laid down by 3GPP WG-4 clearly state that any pair of cells on the same frequency and with overlapping coverage must be synchronized to within 3µs. Although the 3GPP specification is expressed as a relative 3µs alignment between cells, the requirement is usually interpreted as alignment of each cell to within ±1.5µs of a standard reference time such as UTC.
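To see why the ±1.5µs interpretation is sufficient, note that if each cell is within ±1.5µs of UTC, any pair of cells can differ by at most 3µs. The short Python sketch below checks a pair of hypothetical cell time-error values against both limits; the cell names and numbers are invented purely for illustration.

```python
# Minimal sketch: relating per-cell time error (vs. UTC) to the 3GPP
# relative alignment requirement between overlapping cells.
ABS_LIMIT_US = 1.5   # per-cell limit vs. UTC (common interpretation of the spec)
REL_LIMIT_US = 3.0   # 3GPP relative limit between overlapping cells

# Hypothetical measured time errors vs. UTC, in microseconds.
cell_te_us = {"cell_A": +1.2, "cell_B": -1.4}

for name, te in cell_te_us.items():
    print(f"{name}: |TE| = {abs(te):.1f} us, within +/-{ABS_LIMIT_US} us: {abs(te) <= ABS_LIMIT_US}")

# If each |TE| <= 1.5 us, the difference between any two cells is <= 3.0 us.
relative_te = abs(cell_te_us["cell_A"] - cell_te_us["cell_B"])
print(f"relative TE = {relative_te:.1f} us, within {REL_LIMIT_US} us: {relative_te <= REL_LIMIT_US}")
```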
In an operator’s 5G TDD network, if two overlapping cells are out of sync, there is the potential for the downlink from one tower to interfere with the uplink from user equipment at a different tower. The problem is compounded by the elimination of guard bands between the allocated spectrum of different operators. Even though adjacent allocated spectrum is notionally non-interfering, in practice it can still cause interference. It is not possible to completely filter away out-of-band emissions, and, without guard bands, these emissions can be at significant power levels within the band of an adjacent cell. If transmissions between different networks are not synchronized, interference can result, especially because the interfering tower may be closer to the victim receiver than the user equipment it is listening for. A further problem arises when poorly synchronized in-band and out-of-band interfering transmissions drive the receiving amplifiers into nonlinear operation; the resulting intermodulation leads to further interference and performance issues.
TDD systems like 5G NR must therefore maintain the specified phase/time synchronization, and operators must design their networks and select network appliances so as to stay within their timing error budget. Appendix V of ITU-T G.8271.1 offers example error budgets for networks using different classes of equipment. From this, we get the familiar threshold of ±1.1µs at reference point C, between the core network and the end application. Point C is often the final point in the physical network where we can access a 1pps signal or the PTP packet flow from the upstream clock, allowing synchronization to be measured, even if access there is not always convenient. Downstream from this point, sync testing gets tricky: the traditional signals that allow synchronization testing may no longer be present.
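As a simplified sketch of the kind of check made at point C, assume we already have a series of per-second time-error samples (in nanoseconds) taken from a 1pps or PTP measurement against a traceable reference; the sample values below are hypothetical, and real test equipment applies the measurement filtering defined in the ITU-T recommendations over much longer captures.

```python
# Simplified sketch: checking max|TE| at reference point C against the
# +/-1.1 us network limit, given a list of time-error samples in nanoseconds.
POINT_C_LIMIT_NS = 1100

def max_abs_te(te_samples_ns):
    """Largest absolute time error seen over the capture."""
    return max(abs(te) for te in te_samples_ns)

# Hypothetical capture: one TE sample per second over a short window.
te_samples_ns = [40, 55, -30, 120, 90, -75, 60]

worst = max_abs_te(te_samples_ns)
print(f"max|TE| = {worst} ns, within +/-{POINT_C_LIMIT_NS} ns: {worst <= POINT_C_LIMIT_NS}")
```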
However, with the accelerating rollout of 5G, which is much more reliant on TDD, and with the increasingly complex and disaggregated nature of the CU, DU, RU chain, it is more important than ever that phase synchronization is measured beyond reference point C and that the error contribution of the fronthaul components is taken into account. Ideally, the measurement should be done over the air so as to capture the full synchronization performance of the network. This means more than a quick, single-number check of time alignment: it means a full analysis of the static and dynamic behaviour of the synchronization, just as we have always done at reference point C.
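One way to think about the static/dynamic split is to separate the mean offset from the variation about that mean. Real test equipment applies the specific metrics and filters defined in the ITU-T recommendations; the decomposition below is only an illustrative sketch with invented sample values.

```python
# Illustrative decomposition of a time-error capture into a static (mean)
# component and a dynamic (variation about the mean) component.
from statistics import mean

def static_and_dynamic_te(te_samples_ns):
    static_te = mean(te_samples_ns)                      # constant offset component
    residual = [te - static_te for te in te_samples_ns]  # variation about the mean
    dynamic_pk_pk = max(residual) - min(residual)        # peak-to-peak of the variation
    return static_te, dynamic_pk_pk

# Hypothetical over-the-air TE samples in nanoseconds.
samples = [820, 860, 805, 790, 845, 830, 815]
static_te, dyn_pk_pk = static_and_dynamic_te(samples)
print(f"static TE ~ {static_te:.0f} ns, dynamic TE (pk-pk) ~ {dyn_pk_pk:.0f} ns")
```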
Historically, it was very difficult to make a robust and detailed analysis of the timing beyond point C. However, this may not have been a big issue since, prior to the implementation of 4G and 5G TDD, frequency synchronization was all that was required. Until now we may have been content with the ITU recommendation that, beyond reference point C, further time synchronization degradation can be accounted for with a 150ns noise contribution from the end application and 250ns to account for failure events; together with the ±1.1µs at point C, these allowances make up the ±1.5µs limit at the air interface.
Part of the error budget developed by the operator will be an allowance for failure events such as re-arrangements and short-term holdover, which occur during loss of PRTC traceability or a short interruption of the GNSS signal. It may only be once an operator starts measuring synchronization performance over the air that the question arises of how to account for the budget allowances for failure events, and this raises an interesting thought. If you are analyzing a mobile cell transmission for its synchronization behaviour, what is the proper testing threshold for max|TE| at the air interface? The immediate answer is, of course, ±1.5µs. But what about the sync budget allocated to holdover and re-arrangement? If we assume that the network is running without failures while it is being tested, then shouldn’t the error budget for re-arrangements and short-term holdover be removed from the pass/fail threshold?
In Appendix V of ITU-T G.8271.1, several example scenarios are given in which there is a failure in some part of the synchronization chain. These examples allocate sync budget for holdover and re-arrangement of anywhere from 250ns to 620ns. In the latter case, one could argue that when the system is operating without failure, the time alignment at the air interface should be within ±880ns rather than the familiar ±1.5µs.
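The arithmetic behind that argument is simple enough to make explicit. The allowance values below are the ones quoted from the G.8271.1 Appendix V examples; whether to apply the adjustment is, as discussed next, the operator's call.

```python
# Sketch of the threshold adjustment discussed above: if the network is known
# to be failure-free during the test, the holdover/re-arrangement allowance
# can arguably be subtracted from the air-interface limit.
AIR_INTERFACE_LIMIT_NS = 1500

# Holdover/re-arrangement allowances from the G.8271.1 Appendix V examples.
for failure_allowance_ns in (250, 620):
    adjusted = AIR_INTERFACE_LIMIT_NS - failure_allowance_ns
    print(f"failure allowance {failure_allowance_ns} ns -> "
          f"failure-free test threshold +/-{adjusted} ns")
# -> +/-1250 ns and +/-880 ns respectively
```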
Well, there is no right or wrong answer. What is important is that the operator constructs their own budget and carries out over-the-air tests to verify that their complete end-to-end network is operating as designed. Tests should ideally be conducted over an extended period of hours or even days to identify issues that might be related to the time of day. It also seems reasonable that elements of the error budget that relate to fault situations, such as re-arrangement and holdover, should be excluded from the target synchronization level to be attained during normal operation.
Related literature: Over the Air technical primer
Related product: Calnex Sentinel
Bryan Hovey
Product Manager