# Tag Archives: Research

## Back from ECCTD 2011

After the crystal-clear scientific presentations at ECCTD, I'm now back in the mist again.

So, I’ve been back from ECCTD 2011 since late Wednesday, and up here at the southern edge of the northern half of Sweden, the mornings are misty and the leaves are turning yellow. Visiting Linköping was every bit as pleasant as I had expected. The conference was held at Linköping Konsert & Kongress, which is beautifully located in the center of the city, right next to Linköping Cathedral.

# The conference

The conference was excellently organized by the Electronics Systems division at Linköping University and the conference committee. I was particularly impressed with the student helpers. Not only were they helpful, kind and attentive, but quite a few of them also turned out to be passionate about data-converter research and development [one of the healthier states of the human mind, BTW 😉 ] and we had several interesting conversations on the topic. I couldn’t possibly have felt more welcome.

Professor Borivoje Nikolić speaks about managing variability.

After we had all been welcomed by the conference general chair, Prof. Lars Wanhammar, Linköping University, the conference started with a plenary presentation, “Managing Variability for Ultimate Energy Efficiency”, given by Prof. Borivoje Nikolić from UC Berkeley, USA. The conference then split up into various sessions, which are described in detail in the program. ECCTD is a rather broad conference, but there were at least three dedicated data-converter sessions: “Sigma-Delta Modulators”, “Data Converters”, and “Pipelined ADCs”. I had the honor of chairing “Pipelined ADCs”, and I presented my own contribution “Area Efficiency of ADC Architectures” in the “Data Converters” session. I might come back to the content of that paper in another post, but in short (for those of you who were not there), it surveys chip area vs. performance in speed and resolution for just about every ADC implementation reported in the scientific literature since 1974 – approximately 1500 papers. A normalized area measure

$A_{Q} = \dfrac{A}{2^{ENOB}}$

was proposed, based on the observed correlation between absolute chip area (A) and effective resolution (ENOB). State-of-the-art $A_{Q}$ – a.k.a. “area per effective quantization step” – was seen to be independent of both ENOB and sampling rate over broad ranges of resolution and sampling rate. It is also approximately independent of CMOS process node. Chip area per effective quantization step was then compared for individual architectures, and design guidelines were derived for area-optimal ADC architecture selection at any given speed and resolution specification. It was seen that there are large differences in the peak area efficiency achieved with different ADC architectures: for example, a factor of 3 between SAR and pipeline, and a factor of 10 between pipeline and flash. Such big area differences can translate to a lot of money if you’re developing high-volume ADCs. So make sure you get hold of this paper as soon as it comes up on IEEE Xplore.
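For illustration only, the normalized area measure is easy to evaluate. The designs below are hypothetical placeholders with made-up numbers – they are NOT data points from the survey:

```python
def area_per_step(area_mm2, enob_bits):
    """Normalized chip area A_Q = A / 2^ENOB, in mm^2 per effective quantization step."""
    return area_mm2 / (2.0 ** enob_bits)

# Hypothetical example designs (illustrative values only, not from the paper):
designs = {
    "SAR, 9.5 b": (0.05, 9.5),      # (chip area in mm^2, ENOB)
    "pipeline, 11 b": (1.2, 11.0),
    "flash, 5.5 b": (0.4, 5.5),
}

for name, (area, enob) in designs.items():
    aq_um2 = area_per_step(area, enob) * 1e6   # convert mm^2 -> um^2
    print(f"{name}: A_Q = {aq_um2:.1f} um^2 per effective quantization step")
```

The point of the normalization is visible even with toy numbers: dividing by $2^{ENOB}$ puts low-resolution and high-resolution designs on the same per-step scale, so architectures can be compared directly.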

The blogger as session chair. Photo: Mark Vesterbacka

Professor Mark Vesterbacka, Linköping University had to push the electronics in his mobile phone to the maximum in order to document my chairing efforts in spite of the low light. Thanks for sending the picture.

# CWCP winner

I noticed a slight peak in blog visitors yesterday. I assume that many of you wanted to know who won the Connect-with-Converter Passion (CWCP) prize, and I apologize for not being as fast as Dr. J Jacob Wikner, who was blogging live from ECCTD and managed to fire away several conference-related posts on Mixed-Signal Electronics while ECCTD was still unfolding. One of them correctly revealed that we had a CWCP winner already after the first day. And the winner is:

CWCP-winner Kiran Kariyannawar

Kiran Kariyannawar from Ericsson AB, who showed the enthusiasm and dedication necessary to win the CWCP prize for ECCTD. Congratulations Kiran! Kiran was there together with other Ericsson colleagues to demonstrate The Connected Tree and how to transmit audio and video signals through the human body. Quite far out compared to most demonstrations I’ve seen at scientific conferences. Very fun (at least from a tech nerd’s perspective), and I’m sure they will figure out a lot of applications for it eventually, although for now they didn’t seem quite sure what to do with it. At least not with the connected tree. I played a bit with the human-body transmission (by becoming the channel), and I think it could be great for DJ-ing. I was just about to get it to rock big time when I started to realize the other delegates’ need for less noise and gave up. If only I had a few more minutes to work out that groove …

The next big thing in DJ-ing? Just intermittently add a human body connected between those metal plates – preferably in a rhythmic pattern – and you're all set.

# Other impressions

The conference dinner was held at the Air Force Museum – a place I’m likely to return to when I have more time to look at everything. Most likely with the rest of the family. A few photos below will give you some idea of the location. Finding unorthodox locations that can make the conference dinner extra memorable is probably a real challenge for most organizers. Unless they start taking us to outer space and back, I believe that the abundant food stations combined with the breathtaking beauty of the sea life shown at Monterey Bay Aquarium (ISCAS 1998) will remain my personal favorite, but ECCTD 2011 is now firmly in the top two. Excellent work!

A classic Swedish beauty.

Chopper techniques. Large implementation.

A peaceful dinner ...

A missile of some kind, with a sign in Swedish saying “DO NOT PUSH HERE”. Now, how irresistible is that on a scale of one to ten? Photo: M. Reza Sadeghifar

Having been to a few conferences, you start to recognize faces that keep coming back. I had the pleasure of meeting again delegates I’d met recently – some at NORCHIP, some at ICECS, and others at IWADC. It was great seeing you all. That is the real value of going to conferences.

Peace!

ECCTD 2011 face-recognition. Note that we observed a severe Linköping bias here that might be compensated for in "future work".

## Converter Passion Hall of Fame and other blog updates

Just a short note about the Converter Passion Hall of Fame, which opened here on the blog today. There are actually two separate halls at the moment – one for each of the two most commonly used ADC figures-of-merit – and the collection may expand in the future. You’ll find them under the aptly named menu item “HALL OF FAME”.

There have also been some changes in the readout from the Converter Passion ADC FOM-o-meter, but I’ll tell you more about that in the next post, right after this one.

## ADC FOM: What is a good figure-of-merit?

No rocket science: A good FOM should simply reflect the merits of the ADC

So, what is a good figure-of-merit (FOM) for analog-to-digital converters (ADC)? What is technically sound? What makes a figure-of-merit relevant, and what is good practice when using it? I won’t be able to cover everything in one post, so the plan is to keep returning to this subject over a number of posts.

# What is a figure-of-merit?

So, what is a figure-of-merit in the first place, and what should we expect from a good one? Well … Wikipedia currently defines it as:

“… a quantity used to characterize the performance of a device, system or method, relative to its alternatives. In engineering, figures of merit are often defined for particular materials or devices in order to determine their relative utility for an application. In commerce, such figures are often used as a marketing tool to convince consumers to choose a particular brand.”

Another quote from the Wikipedia entry touches on the question of relevance and proper use:

“When used in deceptive advertising, the deception lies more in the question of relevance rather than truth since the number quoted as a figure of merit may not be enough to determine performance when comparing products.”

As a consumer of whatever is marketed or assessed with a FOM – be it commercial ADC parts or scientific results – I also want to know that the FOM is designed so that whatever is awarded a state-of-the-art value actually has the best performance or the greatest utility to me. That is a well-conceived FOM. A well-conceived FOM also gives equal value to all objects that have equivalent performance with respect to whatever the FOM is supposed to measure, while an ill-conceived FOM can give widely different values for equivalent actual merits. In short, I would say that:

A good figure-of-merit should accurately reflect the merits of the ADC in the context and for the purpose which the figure-of-merit is used.

You are welcome to share your thoughts on this. What would you expect from a good FOM? What criteria do you use to identify an ill-conceived FOM?

# Purpose & context

A figure-of-merit is used for a purpose and in a context. Common purposes include:

• Marketing
• Product performance comparison
• Comparison of scientific achievement
• Identifying the best component for a particular task

The context is also important. If you want to apply a FOM to a set of commercial part specifications to find out which part is the best for your current project, then you can define pretty much any FOM expression you’d like, as long as you know it will help you detect the best circuit. The context is local – your project and your organization. You only have to convince your project team and perhaps the steering group that the FOM is technically sound and will do the job.

If, on the other hand, you wish to propose a FOM that can be universally applied to compare the merits of widely different circuits, the context is global. The demands will be higher – both with respect to the mathematical expression and your ability to convince others that the FOM is sound. We will focus on this latter case.

ADC FOM vs. CMOS node. This FOM improves with scaling.

## Universal comparison of merits in a global context

Universal comparison of merits can be divided into at least two major purposes: (I) product comparison and (II) comparison of scientific achievement. When comparing the merits of a product, it doesn’t matter if a FOM is biased towards certain design parameter values. If the FOM correctly represents end-user satisfaction, it is irrelevant whether or not you can always achieve a better FOM by reducing power, lowering the supply voltage, or using a more recent manufacturing technology. If new technology makes the design easier each year, who cares? For the end user it doesn’t matter how easy or hard it was for the engineers to develop the product – as long as the FOM measures how good the product is for the user, all is well.

When a FOM is used to measure or claim scientific achievement and progress, it does matter if certain corners of the parameter space always give the best results. The FOM then becomes a measure of how close you are to that corner, rather than a measure of some universal achievement. This is actually the case with the most commonly used FOM today:

$F_{A1} = \dfrac{P}{2^{ENOB} \times f_s}$

It was shown in [1] that a distinct feature of $F_{A1}$ is that it improves with every step of CMOS scaling. Roughly, $F_{A1}$ improves by 100 times for every 10 times of process scaling, as seen in the FOM vs. CMOS node plot above. In practice, this means that organizations that are able to use the latest technologies will always win the race with respect to $F_{A1}$, while those that refine their designs in other ways (without moving to a newer technology node) have practically no chance. Its usefulness as a universal measure of scientific achievement in the power-performance trade-off can therefore be questioned.
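As a minimal sketch of how $F_{A1}$ is evaluated – note that the ENOB value below is back-calculated from the headline figures of [2] purely for illustration, and that the scaling loop only restates the trend from [1] in code form:

```python
def fom_a1(power_w, enob_bits, fs_hz):
    """F_A1 = P / (2^ENOB * fs), in joules per conversion-step."""
    return power_w / (2.0 ** enob_bits * fs_hz)

# Example: 1.9 uW at 1 MS/s with an assumed ENOB of ~8.75 b gives a few
# fJ/conversion-step, in the ballpark of the van Elzakker et al. design [2].
print(f"{fom_a1(1.9e-6, 8.75, 1e6) * 1e15:.1f} fJ/conversion-step")

# The ~100x improvement per 10x of process scaling reported in [1] corresponds
# to F_A1 varying roughly with the square of the feature size:
for node_nm in (350, 130, 65):
    print(f"{node_nm} nm: relative F_A1 ~ {(node_nm / 350.0) ** 2:.3f}")
```

A smaller $F_{A1}$ is better, which is exactly why the quadratic node dependence matters: halving the feature size alone moves a design roughly 4× closer to "state of the art" by this measure.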

That said, it should be understood that designing in deeply scaled nanometer technologies is certainly not without challenges. Quite the contrary – it poses many design challenges, and it is a scientific or engineering achievement to break new ground and design ADCs in the most recent CMOS nodes. But the point here is the particular FOM $F_{A1}$, and that process scaling almost automatically improves it. A research group that develops innovative architectures or circuit techniques to improve the power-performance trade-off within the same node is therefore much less likely to publish state-of-the-art $F_{A1}$ values than a group that focuses on the problems of porting its design to newer technologies. Hence $F_{A1}$, the most commonly used FOM today, heavily promotes the use of new process technology, and this should be kept in mind when comparing the FOM values reported in different papers.

I also want to clarify that I’m not suggesting that those that have defined the state-of-the-art evolution of $F_{A1}$ have effortlessly surfed the wave of CMOS scaling. Many, most, or all of these designs have reached state-of-the-art through a combination of technology scaling and innovative techniques for power reduction. As an example, the design by van Elzakker et al. [2] currently listed as state-of-the-art on the FOM-o-meter page, combines the advantages of 65 nm technology with a low-energy multi-step switching charge-transfer technique to reach a truly impressive result.

# Industrial and scientific relevance

As discussed above, a FOM may be relevant for comparing the performance of commercial products without being suitable for comparing scientific achievement. In my opinion, $F_{A1}$ has industrial relevance only to the extent that it measures what buyers truly want from an ADC part. I’m not in a position to fully assess whether $F_{A1}$ is representative of market demand, or if the market has simply been taught by ADC vendors that “this is what you really want” 🙂 so that sourcing people now keep asking for it. It would certainly be interesting to hear your thoughts on that – both from a sourcing and from a vendor perspective.

Regarding scientific relevance, $F_{A1}$ – a.k.a. the “ISSCC FOM” – has some redeeming features in that it can be shown that an ADC with state-of-the-art $F_{A1}$ is indeed highly optimized with respect to energy per sample. On the other hand, $F_{A1}$ displays such a strong correlation with many design parameters that it can also be shown that a state-of-the-art $F_{A1}$ can only ever be achieved at certain sweet spots and golden corners of the design parameter space. Its almost canonical status as a global measure of scientific achievement, and possibly even as a criterion for publication, is therefore in my opinion questionable. Or at least something that needs a serious debate. I’m sure that many of my blog readers have an opinion too, and it would be great to hear what you think. It is no problem if you have a different view – I’d like to hear it anyway. Perhaps you can bring me back to “the narrow path” 😉 …

I hope to get back with more details on sweet spots and corners in future posts, but for now the FOM vs. CMOS node plot can serve as illustration of a “golden corner” with respect to process technology.

# FOM discussions in the literature

There are only a few literature references in this post, simply because I’m not aware of any longer discussion of the topic having taken place anywhere. But if you are aware of any scientific papers, business magazine articles, application notes or web pages treating the title question of this post – “What is a good ADC FOM?” – then I’d be very happy to hear about it and to include references to them here. Please use the comment function, or email me.

Bult includes in his ESSCIRC 2009 paper [3] a brief but good discussion of how the current scientific competition is centered around $F_{A1}$, and its consequences for power-dissipation reporting practices – a topic I will return to in a future post. Bult also reflects on the relevance of $2^{ENOB}$ and $2^{2\times ENOB}$ in view of the observed and expected correlation between ENOB and P in actual circuits. In Carsten Wulff’s Ph.D. thesis [4] there is a discussion of figures-of-merit, and Murmann also briefly discusses the relevance of $2^{ENOB}$ and $2^{2\times ENOB}$ in his CICC 2008 paper [5].
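To make the $2^{ENOB}$ vs. $2^{2\times ENOB}$ discussion a little more concrete, here is a hypothetical generalized form. The function name and the exponent parameter k are my own shorthand for this sketch, not notation from [3]–[5]:

```python
def generic_energy_fom(power_w, enob_bits, fs_hz, k=1.0):
    """P / (2^(k*ENOB) * fs): k=1 recovers F_A1, while k=2 gives the
    2^(2*ENOB) variant motivated by thermal-noise-limited designs, where
    power tends to grow ~4x per added effective bit."""
    return power_w / (2.0 ** (k * enob_bits) * fs_hz)

# The two exponents rank designs differently. Consider gaining one effective
# bit at the cost of doubling the power (same sample rate):
base_k2 = generic_energy_fom(1e-3, 10.0, 1e6, k=2.0)
more_bits_k2 = generic_energy_fom(2e-3, 11.0, 1e6, k=2.0)
print(more_bits_k2 < base_k2)   # k=2 rewards the extra bit
```

Under k=1 the same trade is FOM-neutral (doubled power exactly cancels the doubled $2^{ENOB}$), which is one way to see what is at stake in choosing the exponent.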

Now, if after reading this far you are tempted to try your hand at designing your very own and much better ADC FOM, then you have the perfect DIY-kit here at Converter Passion. It will help you to shape and define almost any FOM of your liking. You can always start building it from scratch if you feel adventurous, but why not start with the “Mother of all FOM”, and the generic FOM classes I’ve put together for you.

Enjoy, and don’t forget to share your views on ADC figures-of-merit with the rest of us.

And … if you invent a smashing ADC FOM, or already have published one that I’ve missed, be sure to post it in the comments.

Let me know if you want help getting the WordPress LaTeX to work. It can be used in the comments as well. Here’s an on-line LaTeX equation editor that makes life easier. Because the WordPress LaTeX parser is much less forgiving, the code sometimes needs some final polishing before it renders correctly.

References

[1] B. E. Jonsson, “On CMOS scaling and A/D-converter performance,” Proc. of NORCHIP, Tampere, Finland, pp. 1–4, Nov. 2010.

[2] M. van Elzakker, E. van Tuijl, P. Geraedts, D. Schinkel, E. Klumperink, and B. Nauta, “A 1.9μW 4.4fJ/conversion-step 10b 1MS/s charge-redistribution ADC,” Proc. of IEEE Solid-State Circ. Conf. (ISSCC), San Francisco, California, pp. 244–245, Feb. 2008.

[3] K. Bult, “Embedded analog-to-digital converters,” Proc. of Eur. Solid-State Circ. Conf. (ESSCIRC), Athens, Greece, pp. 52–60, Sept., 2009.

[4] Carsten Wulff, Efficient ADCs for nano-scale CMOS Technology, PhD Thesis, Norwegian University of Science and Technology, Trondheim, Norway, Dec. 2008.

[5] B. Murmann, “A/D converter trends: Power dissipation, scaling and digitally assisted architectures,” Proc. of IEEE Custom Integrated Circ. Conf. (CICC), San Jose, California, USA, pp. 105–112, Sept., 2008.

## Paper Overflow

The exponential growth in ADC publications

Two years ago I set out to read, process and systematically analyze the data from every single scientific paper ever published in my main field – monolithic A/D-converter integration. One thousand five hundred scientific publications, and counting! It seemed, as they say, like a great idea at the time, but it sure isn’t a task for the faint-hearted. No regrets, though.

I hope to be able to share some of the insights and knowledge gathered from this study right here on the blog. Not everything will be directly helpful in your design project of course, but there are other values in life, right? We’re just not sure exactly what they are yet …

In this post I want to highlight the steady increase in publications we have seen over the 36 years or so since Baldwin’s delta modulator [1]. I believe it was the first monolithic ADC to be reported. (Please send me an email or post a comment here if you consider another publication to be the first.)

You will need many sheets of paper if you plan to print all those publications …

The plot above shows how the total number of ADC publications has gone from a handful to over 100 scientific papers per year. (For practical reasons, some non-IEEE sources were not included in the count.) Thanks to the logarithmic y-axis we can easily see that the total number of ADC papers has grown exponentially for at least 20 years now, and there’s no sign of a slow-down.
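As a side note, a doubling time for this kind of exponential growth can be estimated with a simple log-linear fit. The yearly counts below are made-up placeholders chosen to double every five years – they are NOT the data behind the plot:

```python
import math

# Hypothetical yearly paper counts (illustrative only):
years = [1990, 1995, 2000, 2005, 2010]
counts = [10, 20, 40, 80, 160]

# Least-squares slope of ln(count) vs. year gives the exponential growth rate;
# ln(2) divided by that rate is the doubling time.
n = len(years)
x_mean = sum(years) / n
y_mean = sum(math.log(c) for c in counts) / n
slope = (sum((x - x_mean) * (math.log(c) - y_mean) for x, c in zip(years, counts))
         / sum((x - x_mean) ** 2 for x in years))
print(f"doubling time ~ {math.log(2) / slope:.1f} years")
```

With real, noisier counts the same fit works; only the straight-line quality of ln(count) vs. year changes.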

Personally I see this as good news. I am passionate about A/D-converters, and this means that my field is given more significance and there is also more experimental data available to use as reference points for your own work.

But … how do we cope with an ever-increasing influx of scientific reports? Are there any negative effects at all? Is it OK if every company needs to allocate one senior designer just to read all the publications? Is there an upper limit to how many ADC papers you want to see published in 2020, or is 250 papers a significant improvement over 100? Would 500 be even better?

Personally I’m leaning heavily towards “the more, the merrier“.

But that’s me. What do you think?