A robust Pipelined ADC Calibration Algorithm with Double-Value Set

IP.com Disclosure Number: IPCOM000124399D
Original Publication Date: 2005-May-20
Included in the Prior Art Database: 2005-May-20
Document File: 5 page(s) / 210K

Publishing Venue

Siemens

Related People

Juergen Carstens: CONTACT

Idea: Luca Gori, AT-Villach; Michael Szegedi, AT-Villach; Dr. Dietmar Straeussnigg, AT-Villach; Peter Bogner, AT-Villach; Antonio Di Giandomenico, AT-Villach

The pipelined ADC (Analog-to-Digital Converter) background calibration algorithm uses the correlation between an injected low-frequency random signal and the output of the A/D converter (see Fig. 1; PSD: Power Spectral Density) to estimate the DAC (Digital-to-Analog Converter) weight errors (DAC and gain error). The estimated errors are used to determine a set of calibration values.
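
As a rough illustration of this correlation step, a minimal sketch in Python follows (the disclosure does not specify an implementation; the function name and signal shapes are assumptions). The weight carried by the injected ±1 sequence is recovered by multiplying the converter output with that sequence and averaging, and the calibration values are then chosen to cancel the deviation from the nominal weight.

```python
import numpy as np

def estimate_dac_weight(adc_out, prbs):
    """Correlate the digital output with the injected +/-1 pseudo-random
    sequence.  Because the PRBS is zero-mean and (ideally) uncorrelated
    with the converted input, the average of adc_out * prbs converges to
    the actual analog weight with which the sequence was injected; its
    deviation from the nominal weight is the DAC/gain error to correct."""
    adc_out = np.asarray(adc_out, dtype=float)
    prbs = np.asarray(prbs, dtype=float)   # values in {+1.0, -1.0}
    return float(np.mean(adc_out * prbs))
```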

This background calibration is not robust against low-frequency signals in the same band as the pseudo-random noise used by the algorithm. For this reason it is not well suited to various DSL (Digital Subscriber Line) applications (e.g. ADSL, VDSL, SHDSL, MDSL), where such signals are often present (cf. Fig. 2; e.g. the automated tax information line tele-tax; Inter-Symbol Interference, ISI). When a low-frequency input signal is applied, the correlation process can, depending on the signal's amplitude, produce a wrong estimate of the capacitor errors. In that case the calibration yields an A/D converter whose linearity is even worse than that of an uncalibrated one. In short, the background calibration algorithm, although very effective at increasing the linearity of the pipelined A/D converter, is sensitive to low-frequency components of the input signal that may fold into the PSR (Pseudo-Random Sequence) band (see Fig. 3).
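
The failure mechanism can be reproduced numerically. In the toy model below (all lengths, amplitudes, and frequencies are illustrative assumptions), a busy broadband input averages out of the correlation, while a strong tone inside the PSR band leaves a residual that corrupts the weight estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
M, CHIPS = 64, 256                 # chip hold length and chips per estimate
N = M * CHIPS                      # samples in one correlation window
# low-frequency PRBS: each +/-1 chip is held for M samples
prbs = np.repeat(rng.choice([-1.0, 1.0], size=CHIPS), M)
true_weight = 0.25                 # analog weight carrying the PRBS

def estimate(extra_signal):
    # same correlation as in the previous sketch
    return float(np.mean((true_weight * prbs + extra_signal) * prbs))

wideband = rng.standard_normal(N)                         # broadband input
lowfreq = 2.0 * np.sin(2 * np.pi * 1e-4 * np.arange(N))   # tone in the PSR band

print(estimate(wideband))   # stays close to 0.25
print(estimate(lowfreq))    # can deviate strongly -> wrong calibration values
```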

The proposed new system solves the abovementioned problem by using two sets of calibration values, which can be evaluated separately, as shown in the example in Fig. 4. After power-up, a first set of calibration values (the start-up calibration set) is evaluated by the first DSP (Digital Signal Processor; also called the algo-machine), as shown in Fig. 5(a). This operation is performed under a special condition: no input signal is applied to the A/D converter. The result of this first calibration is a set of values that optimally calibrates the A/D converter.
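
A minimal sketch of this start-up phase, assuming a hypothetical measure_weight_errors routine that stands in for the PRBS-correlation algo-machine:

```python
from typing import List

def measure_weight_errors(input_grounded: bool) -> List[float]:
    """Placeholder (hypothetical) for the PRBS-correlation estimator of
    Fig. 1; in the real system this is the digital algo-machine."""
    raise NotImplementedError  # hardware/DSP dependent

def startup_calibration() -> List[float]:
    # Special condition: no input signal is applied to the A/D, so the
    # correlation cannot be disturbed by in-band signal components and
    # the resulting set optimally calibrates the converter.
    return measure_weight_errors(input_grounded=True)
```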

During normal operation (input signal applied to the A/D converter) the operating conditions may change slightly (Vdd, temperature, etc.), so a new set of calibration values may be required. A second copy of the digital calibration algorithm therefore runs as a background calibration process (see Fig. 5(b)). When a new calibration set is available, a DSP (Digital Signal Processor; in DSL applications the data-pump DSP is already present on the application board) can compare the new set with the start-up calibration set and decide whether the new set is valid (i.e. not affected by the low-frequency component problem); if it is valid, the new set is used (Fig. 5(c)). This background process can be iterated by the two DSPs, so that the best coefficient set is always available and ready to be used (Fig. 5(d)). Another interesting possibility is to r...
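
The compare-and-decide step described above might look like the following sketch (the deviation metric and the tolerance tol are assumptions; the disclosure only requires that the background set be validated against the start-up set before it is used):

```python
from typing import List

def accept_new_set(startup_set: List[float], new_set: List[float],
                   tol: float = 0.05) -> bool:
    """Accept the background-calibrated set only if it stays close to the
    trusted start-up set; a large deviation suggests that the correlation
    was corrupted by low-frequency input components."""
    return max(abs(n - s) for n, s in zip(new_set, startup_set)) <= tol

# Fig. 5(c): the data-pump DSP swaps in the new set only when it is valid
startup_set = [0.251, 0.124, 0.063]    # illustrative values
active_set = startup_set
new_set = [0.249, 0.125, 0.062]        # produced by the background run
if accept_new_set(startup_set, new_set):
    active_set = new_set
```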