One of the first things I did at the Jožef Stefan Institute was to design a small, compact UHF receiver around the TDA18219HN chip from NXP. At the time, the requirements for spectrum sensing only called for precise radiometric measurements of incident signal power. Now, however, it is time to move on to more advanced detection methods, and it would be nice if my hardware could capture the actual signal waveform instead of just its amplitude. Because of that I have spent quite some time recently working on a new version of the receiver.
As usual, things are not going as smoothly as I hoped. In some cases the output signal from the tuner is badly distorted - something I did not notice back when all I was interested in was the signal amplitude. It looks like a problem with automatic gain control, so I dug through what little documentation is available on this chip and did some measurements of my own.
Image by NXP
As the diagram in the datasheet shows, this chip has seven stages with variable gain. Coupled with detectors, they form several feedback loops that try to keep the signal level throughout the tuner approximately constant and within the linear region of the analog circuitry. This is important since this tuner is designed to work with both cable networks and wireless terrestrial reception, which means it must handle signal levels that span almost 10 orders of magnitude in power.
Except for AGCK and IF AGC, all of these stages change their gain in discrete steps. AGCK can adjust its gain continuously and compensates for the step changes in the other stages, giving the illusion of continuous gain variation.
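To illustrate the idea, here is a minimal sketch of how a continuous stage can mask the steps of a coarse stage. The 3 dB step size and the gain values are invented for illustration; the real step sizes and ranges of the TDA18219HN stages are not documented at this level of detail.

```python
def discrete_gain(target_db, step_db=3.0):
    """Gain of a stepped stage: the largest multiple of step_db
    that does not exceed the target gain."""
    return step_db * int(target_db // step_db)

def smoothed_gain(target_db, step_db=3.0):
    """Stepped stage plus a continuous (AGCK-like) stage that
    fills in the remainder, so the total tracks the target exactly."""
    coarse = discrete_gain(target_db, step_db)
    fine = target_db - coarse   # contribution of the continuous stage
    return coarse + fine

for target in (10.0, 11.5, 12.9):
    print(target, discrete_gain(target), smoothed_gain(target))
```

The coarse stage alone jumps in 3 dB increments, while the sum of both stages follows the target gain smoothly, which is the behavior the AGCK stage provides in the tuner.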
The IF AGC gain is controlled externally via an analog pin and is meant to be driven by whatever is decoding the signal (in my experiments this pin was always grounded, setting the lowest gain). All the other stages are controlled automatically by integrated logic. NXP doesn't explain in detail how this works ("the gain is distributed to offer best trade-off between linearity and noise" is about as far as the datasheet goes).
There are some I2C registers that apparently affect the AGC behavior, but apart from the take-over-point setting they are mostly undocumented, so in the end all you can really do is copy the register values from the reference driver implementation. I tried playing with these settings a bit, but didn't see any obvious difference in performance.
The I2C control interface does, however, allow you to monitor the current gain of the AGC1, AGC2, AGC4 and AGC5 stages, so it's at least possible to observe how the tuner reacts to various input signal power levels.
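For the measurements below I polled these registers from Linux. A minimal sketch of such a polling helper, using the smbus2 library, might look like the following. The I2C address and the register map are placeholders only - the real values have to be taken from the datasheet and the reference driver, and are not reproduced here.

```python
TUNER_ADDR = 0x60          # placeholder I2C address, not the real one
AGC_REGS = {               # placeholder register map for the stages
    "AGC1": 0x0C,          # that expose their current gain code
    "AGC2": 0x0D,
    "AGC4": 0x0E,
    "AGC5": 0x0F,
}

def read_agc_codes(busno=1):
    """Return the raw gain codes of the monitorable AGC stages."""
    # Imported here so the register map above is usable even
    # without the smbus2 package installed.
    from smbus2 import SMBus

    codes = {}
    with SMBus(busno) as bus:
        for name, reg in AGC_REGS.items():
            codes[name] = bus.read_byte_data(TUNER_ADDR, reg)
    return codes
```

The raw codes then still have to be translated to dB values per the datasheet's gain tables before they can be plotted.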
The following graph shows how the gain of the tuner changes to keep the output level constant as the input power is swept from -100 to 0 dBm. Shown are the cumulative gains at individual stages (e.g. the AGC4 line includes the gains of AGC1, AGC2 and AGC4). The line labeled "other" shows the total gain and includes the stages that can't be monitored directly (AGC3, AGCK, IF AGC and possibly others). The total gain was calculated from the signal level measured at the output of the tuner.
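The bookkeeping behind the graph is simple enough to show in a few lines. The per-stage gain values below are invented for illustration; in the actual measurement they come from the I2C registers at each input power step.

```python
# Per-stage gains in dB, in signal-path order (illustrative values only).
stage_gains = {"AGC1": 9.0, "AGC2": -2.0, "AGC4": 14.0, "AGC5": 9.0}
order = ["AGC1", "AGC2", "AGC4", "AGC5"]

# Cumulative gain up to and including each stage - these are the
# individual lines on the graph.
cumulative = {}
running = 0.0
for name in order:
    running += stage_gains[name]
    cumulative[name] = running

# Total gain follows from the measured powers; "other" is whatever
# the non-monitorable stages (AGC3, AGCK, IF AGC, ...) contribute.
p_in_dbm = -70.0           # input power set on the signal generator
p_out_dbm = -8.0           # power measured at the tuner output
total_gain = p_out_dbm - p_in_dbm
other = total_gain - cumulative["AGC5"]
```

Repeating this at each input power level gives one point per line on the graph.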
In the case of the graph above, the signal was not distorted (although there are still some odd variations in gain, like the dip between -80 and -70 dBm input power). I'll leave the case where the signal gets clipped for a later blog post.
While it's interesting to poke around inside a black box like this with a stick, it's not a very productive way to spend time. Unfortunately NXP doesn't offer any kind of design support for these chips, so I'm mostly on my own in solving this (they don't even have a distributor for Europe any more). When I was choosing tuner chips two years ago, this one came out on top on specifications and availability, but the secretive nature of NXP's products and the lack of documentation are becoming a larger and larger obstacle to developing this design further.