Reader question: What is an ADC?

You'd think that "What is an ADC?" must be a simple question to answer, right? But when scientists meet here at Converter Passion, dissection is inevitable.

Blog reader Michiel van Elzakker posted a more philosophical question in the Q&A section, and I think it deserves a dedicated post:

— “What is an ADC?”

The question – at first glance seemingly a trivial entry-level question (which, in case you wonder, is also welcome here) – turned out to have a lot of scientific depth and potential. Michiel continues:

“There can probably be some consensus on a short answer: ‘Something to convert an analog input into a digital output’.

How about a long answer? Who would like to share his opinion? ;)

– Is a standard digital flip-flop in fact a very low power 1-bit ADC?
– Is it okay to use external calibration to improve an ADC’s accuracy? And if a PC performs the actual correction, does that make the PC part of the ADC?

References & supplies
– Is 0 dB power supply rejection sufficient? 10 dB? 50 dB?
– How many reference voltages can I use? I would like 2^N of them!
– Is it okay to use a reference clock? At a higher frequency than the actual sample rate?”

As I understand Michiel’s question, it is about the limits of what we would even consider calling “an ADC”, and also about what constitutes a “complete ADC”. Interesting indeed! Where are those limits, according to you? Is the calibration PC part of the ADC? Is a flip-flop a 1-bit ADC? Share your wisdom and opinions with the rest of us!
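To make the flip-flop question concrete, here is a minimal sketch of a D flip-flop modelled as a clocked 1-bit quantizer. The supply voltage and logic threshold are illustrative assumptions, not values from the discussion:

```python
# Toy model of a D flip-flop viewed as a clocked 1-bit ADC.
# VDD and the input threshold are assumptions for illustration only.

VDD = 1.8          # assumed supply voltage
V_TH = VDD / 2     # assumed logic threshold of the flip-flop data input

def dff_sample(v_in: float) -> int:
    """On a clock edge, the flip-flop resolves its input to 0 or 1 --
    exactly the transfer function of a 1-bit quantizer."""
    return 1 if v_in >= V_TH else 0

# Sampling a few points of a ramp shows the single decision level at VDD/2:
codes = [dff_sample(v) for v in (0.0, 0.5, 0.9, 1.0, 1.8)]
print(codes)  # [0, 0, 1, 1, 1]
```

Whether this makes the flip-flop “an ADC” is precisely the boundary question: it has the right transfer function, but no specified reference, linearity, or metastability behaviour.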


6 responses to “Reader question: What is an ADC?”

  1. Lately some of the students here have been looking at continuous-time ‘ADCs’ where there is no concept of a clock. Data is event-triggered, and typically you have delta modulation at the front end. Tsividis has a few really good papers on this. It is quite a straightforward idea as such, but from a mathematical point of view it is very interesting – you do not have any aliasing effects, for example, and no concept of a noise floor (well, give or take…)

    Also, I am not sure about the calibration thing – after all, most systems have some kind of loop on top to crank out extra performance, be it AGCs or equalizers or what have you. On the other hand, you could look at, say, VCO-based ADCs, and in that case calibration and post-linearization are kind of inevitable… Hmm.

    Interesting question!

    • I think a continuous time “ADC” is a great example of the boundary of what is an ADC. The good thing about saying “continuous time ADC” is that the functionality is quite clear. When the same thing is just called “ADC”, the functionality is less obvious. For example, I remember that I was initially somewhat confused when I started reading “A New Successive Approximation Architecture for Low-Power Low-Cost CMOS A/D Converter” (JSSC).

      Calibration is indeed a perfectly sensible and sometimes inevitable thing to do. A majority of (Nyquist) ADCs could profit from it, at the expense of additional cost at the chip, factory, or lab level. Especially in the case of a (semi-)continuous calibration system, I don’t know whether the ADC stops before the lab PC. Many ADCs are published without calibration. Do the authors feel that external calibration is not allowed? Or did researchers suddenly start caring about costs?

  2. “did researchers suddenly start caring about costs?”


  3. Do you guys have any preference on the PC issue? I mean, would you want the answer to be “The PC is a part of the ADC”, or not?

    My preference is to not include the PC. I’ve just finished the rather tedious task of updating my ADC survey data, and I certainly don’t want to go back and include details about what PC setup people used … 😉 I’m fanatical about getting all the details, but that’s beyond the limit even for me 🙂

    A thoughtful treatment of an imaginary realization of the off-chip stuff would make more sense to me, so that you could estimate the power (P) and area (A) and any other implementation influences on the ADC performance.

    Now I’m of course talking about what I’d prefer to read about in a paper. I’d really like to maintain a “clean cut” towards the PC.

  4. I think you can leave the PC out … it makes the most sense. After all, in your studies you are looking at integration and the cost/performance measures of the converters themselves.
    Would including the grand total, say a PC (or a big off-chip FPGA), really add much value? It could of course be “translated” into a gate-count equivalent that could potentially be integrated, but I suppose in that case most system architects would claim it to be part of the sea-of-gates rather than the ADC macro anyway.
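The continuous-time ‘ADCs’ mentioned in the first response can be sketched as a level-crossing (delta-modulation-style) sampler: output events fire only when the input crosses a quantization level, with no clock involved. A minimal sketch – the sine input, step size, and simulation grid are all illustrative assumptions, not from the discussion:

```python
# Sketch of an event-triggered (continuous-time) sampler: instead of
# sampling on a clock, an event fires whenever the input crosses the next
# quantization level. All numbers here are made-up for illustration.

import math

DELTA = 0.25  # assumed quantization step (level spacing)

def level_crossing_sample(signal, t_grid):
    """Return (time, direction) events where the signal crosses a level.
    `t_grid` is only a fine simulation grid, not an ADC clock."""
    events = []
    last_level = round(signal(t_grid[0]) / DELTA)
    for t in t_grid[1:]:
        level = round(signal(t) / DELTA)
        while level > last_level:
            last_level += 1
            events.append((t, +1))   # upward crossing event
        while level < last_level:
            last_level -= 1
            events.append((t, -1))   # downward crossing event
    return events

# A slow sine wave produces dense events near its steep zero crossings and
# none near its flat peaks -- the sampling is driven by the signal itself.
t_grid = [i / 1000.0 for i in range(1001)]           # fine grid over 1 s
events = level_crossing_sample(lambda t: math.sin(2 * math.pi * t), t_grid)
print(len(events))
```

Because the event times are signal-dependent rather than clock-defined, the usual aliasing picture does not apply – which is exactly what makes the mathematics interesting.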
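The external-calibration question can also be made concrete. Below is a minimal sketch of PC-side post-correction, assuming a hypothetical 8-bit converter with a made-up static cubic nonlinearity – none of these numbers come from the discussion, and the lookup-table approach is just one of many correction schemes:

```python
# Sketch of off-chip ("lab PC") post-correction of a static ADC
# nonlinearity: measure the transfer function once with a precise ramp,
# then invert it with a lookup table. The cubic error term, 8-bit
# resolution, and full-scale range are all assumptions for illustration.

N_BITS = 8
FS = 1.0  # assumed full-scale input range, 0..1 V

def nonlinear_adc(v_in: float) -> int:
    """Ideal 8-bit quantizer preceded by an assumed static cubic error."""
    v = v_in + 0.05 * (v_in - 0.5) ** 3          # INL-like distortion
    code = int(round(v / FS * (2**N_BITS - 1)))
    return max(0, min(2**N_BITS - 1, code))

# "Factory calibration": sweep a precise ramp and record the average input
# voltage that produced each output code -- this table lives on the PC.
ramp = [i / 10000.0 for i in range(10001)]
bins = {}
for v in ramp:
    bins.setdefault(nonlinear_adc(v), []).append(v)
lut = {c: sum(vs) / len(vs) for c, vs in bins.items()}

def corrected(v_in: float) -> float:
    """ADC plus PC-side lookup: raw code -> calibrated voltage estimate."""
    return lut[nonlinear_adc(v_in)]

err_raw = max(abs(nonlinear_adc(v) / (2**N_BITS - 1) - v) for v in ramp)
err_cal = max(abs(corrected(v) - v) for v in ramp)
print(err_cal < err_raw)  # calibration should shrink the worst-case error
```

The boundary question remains untouched by the sketch: the lookup table could equally well live on-chip, in the factory tester, or on a lab PC – the signal path is identical, only the reported implementation cost changes.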

