Thursday, December 8, 2011

Equalizer design experiences

About two and a half years ago we started a program for the design and development of wireline equalizers, both fixed and adaptive. Our first designs go to fabrication this month. This post documents some of the issues and challenges we faced on this project.

1) Data and models of cables: It was immediately obvious that there is a big hole in the available data on cables. Our designs were for 5 GHz and 1.65 GHz, and we found almost no data on cable characteristics at these frequencies. After a little research it turned out that we would have to do our own modeling using a TDR, Simulink/MATLAB and a few home-grown tools. This is not an inexpensive activity: the boards required as interfaces to the machine cost about $10k apiece, and the TDR itself is a very expensive instrument. We searched the web but found little usable data. Cable manufacturers do publish data, but it turned out to be the wrong kind for our purposes. So cable characteristics are difficult to get.

2) Design tools: The second challenge was that the available IC design tools are, in our opinion, not terribly useful for designing equalizers. Long sequences of very high frequency data are needed to check performance, these simulations run extremely slowly, and simulating a complete chip was almost impossible. We used a combination of Simulink and SPICE-type simulators (including Agilent ADS), but in our opinion they left quite a bit to be desired. Equalizer designers beware!

3) IC process data: The fabrication houses we selected ("world class") provided very good data on their processes. This data covered about 80% of the design; the remaining 20% could not be covered by the data given.

4) ESD protection: This is a problem for high frequency equalizer design in particular, and a good ESD structure is difficult to do in general. The issue is this: if we use the characterized ESD cells, their parasitics become a challenge; if we make our own ESD cells, we have no characterization data. This makes ESD a major challenge in these types of devices, remembering that the input lines come in from the outside world. (Existing TVS devices are woefully inadequate for this kind of ESD protection.)

5) Test: The challenge of testing the equalizers looms large, of course. A combination of standard lab equipment (expensive) and custom-made equipment is perhaps the best approach. As we found, building the custom test equipment is a challenge in itself.

6) Demo boards: A real challenge. We went through a number of iterations with both PCB vendors and designs. The first PCB we did showed a clear impedance step at 150 MHz and caused real errors in the measurements. Subsequent designs were great improvements, but we still need more improvement and are working on it.

So the design and development of these wireline equalizers is, in our opinion, not a "walk in the park." Good luck to all the equalizer designers, and many congratulations to the successful ones. You have really licked the problems!

Wednesday, November 23, 2011

Thermal coefficients for the dielectric constant of FR-4 and PCB materials

In a recent group discussion on LinkedIn, a member asked about the thermal coefficients of FR-4 and other common PCB materials. For interested readers, a very good paper by John Coonrod, in the September issue, tabulates these coefficients.

Saturday, November 19, 2011

Ferrite beads: A useful circuit component

Ferrite beads are a very low cost and easy way to add high frequency isolation loss in a circuit without power loss at DC and low frequencies. Ferrite beads are most effective at frequencies above 1.0 MHz. When used with an appropriate parallel capacitance, they provide high frequency decoupling and parasitic suppression. A brief paper on ferrite beads has been released by Signal Processing Group Inc. and may be found at >> engineer's corner.

Friday, November 18, 2011

Definitions of the Q factor.

1.0 Unloaded Q: energy stored in the component / energy dissipated in the component.

2.0 Loaded Q: energy stored in the component / energy dissipated in the component and the external circuit/load.

Why are power transfer and power quantities used in RF/MMIC circuits?

In high frequency circuits, power transfer and power quantities are used; typically dBm is the standard unit. The question is: why? The answer lies in the relative performance of circuits at high and low frequencies. When frequencies are low, a voltage or current signal applied at the input of a circuit or chip is reproduced quite faithfully inside the chip or at the operating terminals of the circuit, and the same is true at the outputs. The reason is that parasitic elements do not play a large role at low frequencies. The situation is quite different at high or microwave frequencies. There, the voltage or current signal applied to the input terminal of a device package is not what the active device sees inside the package, again because of the parasitics of the circuit. If, instead of input current or input voltage, we use the power delivered to the input port as the signal quantity, this problem goes away, since reactances do not dissipate power. At the output, if the true available power gain of the device is given, we can calculate accurately what to expect, assuming no power is dissipated in the parasitic elements. This is why RF/MMIC circuits are almost always designed using power flow or power transfer considerations.
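Since dBm comes up constantly in this context, here is a minimal sketch of the conversion (dBm is simply power on a log scale referred to 1 mW):

```python
import math

def watts_to_dbm(p_watts):
    """Convert power in watts to dBm (decibels relative to 1 mW)."""
    return 10.0 * math.log10(p_watts / 1e-3)

def dbm_to_watts(p_dbm):
    """Convert dBm back to watts."""
    return 1e-3 * 10.0 ** (p_dbm / 10.0)

p0 = watts_to_dbm(1e-3)   # 1 mW -> 0 dBm
p1 = watts_to_dbm(1.0)    # 1 W -> about 30 dBm
```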

Saturday, November 5, 2011

Measuring temperature using thermistors: The Steinhart-Hart coefficients

The thermistor is a useful device for measuring temperature in certain situations (see the earlier post in this blog). The issue is that the thermistor has a nonlinear temperature response and thus may be complicated to use directly. The thermistor curve (temperature versus resistance) has been modeled very accurately by an analytical equation which can be used to advantage in thermistor based temperature measuring circuits. The fixed coefficients in this equation are referred to as the Steinhart-Hart coefficients and can either be obtained from the manufacturer of the thermistor or be extracted from a resistance versus temperature curve. Once these coefficients are known, an analog-digital system of circuits can be used with a microprocessor to measure temperature relatively accurately. A paper on the extraction of the coefficients and the analytical approach has been released by Signal Processing Group Inc. and is available for interested readers at > engineer's corner.
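As a sketch of the analytical equation mentioned above: the Steinhart-Hart model is 1/T = A + B·ln(R) + C·ln(R)³, with T in kelvin. The coefficient values below are illustrative numbers for a generic 10 kohm NTC, not data for any particular part:

```python
import math

# Illustrative Steinhart-Hart coefficients for a generic 10 kohm NTC thermistor;
# real values come from the manufacturer or a fit to the R-vs-T curve.
A, B, C = 1.129148e-3, 2.34125e-4, 8.76741e-8

def temperature_c(r_ohms):
    """Steinhart-Hart: 1/T = A + B*ln(R) + C*ln(R)**3, with T in kelvin."""
    ln_r = math.log(r_ohms)
    inv_t = A + B * ln_r + C * ln_r ** 3
    return 1.0 / inv_t - 273.15

t25 = temperature_c(10000.0)  # close to 25 C for these coefficients
```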

Wednesday, November 2, 2011

Analog and mixed signal design: Measuring temperature with thermistors

Measuring temperature fairly accurately can be done using a number of methods. The sensors available to do this are the RTD, the pn junction, the positive temperature coefficient (PTC) thermistor and the negative temperature coefficient (NTC) thermistor. Among these options the NTC thermistor seems to be used more and more in applications where the temperature rate of change is fast. The advantages of this type of device are: fast reaction time, small size, two-wire connection and relatively low cost. The disadvantages are: the temperature versus resistance characteristic is very nonlinear, some kind of excitation is required, the temperature range is limited, and the device is subject to self-heating and relatively fragile. In spite of the disadvantages, the thermistor is a choice many design engineers are making. The web has a number of good articles that are very helpful in understanding the thermistor: articles from Betatherm, Microchip Technology and National Instruments, to mention a few. The challenge does not lie in understanding the thermistor itself; it is very easy to understand, at least from the user's perspective. The challenge is in coming up with analog and mixed signal circuitry that interfaces with the thermistor and allows accurate measurement of the temperature. Signal Processing Group Inc. has developed a number of circuits which can be used with varying accuracies to measure temperature with thermistors. Interested users may contact SPG at > contact.

Friday, October 28, 2011

Microwave filters: Lumped element design to transmission line equivalents

As frequencies increase, lumped elements no longer satisfy filter requirements for various reasons (parasitics, accuracy, etc.). At this point the designer may choose to convert the lumped element filter to a distributed element filter. One technique uses transmission line stubs in the conversion. This technique is described in a white paper recently released by Signal Processing Group Inc. Interested readers may find the paper at >> engineer's corner.

Tuesday, October 25, 2011

RF Design: Electrical length

Sooner or later, the design engineer working in microwave or high frequency electronics is going to come up against the concept of electrical length. To understand this concept, let's work out the following arithmetic:

1.0 The wave number or phase constant = β = 2π/λ

For those unfamiliar with this quantity, we recommend the description in the SPG blog.

2.0 The electrical length is defined by θ = βl where l = physical length

3.0 θ = βl = (l/ λ) *360 degrees

Here λ is the wavelength of the signal in the applicable dielectric (sometimes called the guide wavelength).

4.0 For frequencies in GHz and physical lengths in centimeters, this becomes: θ = [360 * f(GHz) * l(cm) * √εeff]/30 degrees

For example:

Let the frequency be 1 GHz.
Let λ = 0.8 λ(air), i.e. √εeff = 1.25.
Let l = 0.1 meters = 10 centimeters.

Then :

θ = [360 * 1 * 10 * 1.25]/30 degrees

θ = 150 degrees
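The arithmetic above is easy to wrap in a small helper; this sketch reproduces the worked example:

```python
def electrical_length_deg(f_ghz, l_cm, sqrt_eeff):
    """theta = 360 * f(GHz) * l(cm) * sqrt(eps_eff) / 30 degrees,
    using c = 30 cm/ns so lambda(air) = 30 / f(GHz) centimeters."""
    return 360.0 * f_ghz * l_cm * sqrt_eeff / 30.0

theta = electrical_length_deg(1.0, 10.0, 1.25)  # 150.0 degrees
```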

Analog design: Magnitude and frequency scaling in filter design

Not infrequently, filters are designed using a different scale for their component parts than the final requirement. For example, a filter could be designed for a normalized frequency of 1.0, with inductors in henries and capacitors in farads. The Smith chart uses scaling as a matter of common usage, so understanding this concept is a vital part of the design engineer's repertoire. This post deals with the very basics of scaling, cookbook fashion, for simplicity. Here are the rules: (1) If each inductance and capacitance is multiplied by a quantity 1/alpha, then the network is said to be scaled in frequency by alpha. (2) If every resistance and inductance is multiplied by a quantity beta, and every capacitance is divided by beta, then the network is said to be magnitude scaled by beta.
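A small sketch of the two rules applied to a hypothetical normalized prototype (the component names and values are illustrative only):

```python
import math

def scale(network, alpha=1.0, beta=1.0):
    """Scale a normalized R/L/C network: frequency by alpha, magnitude by beta.
    network maps a component name to a ('R'|'L'|'C', value) tuple."""
    scaled = {}
    for name, (kind, value) in network.items():
        if kind == 'R':
            scaled[name] = (kind, beta * value)             # R' = beta * R
        elif kind == 'L':
            scaled[name] = (kind, beta * value / alpha)     # L' = beta * L / alpha
        else:
            scaled[name] = (kind, value / (beta * alpha))   # C' = C / (beta * alpha)
    return scaled

# Prototype resonant at 1 rad/s, moved to 2*pi*1 kHz and a 50 ohm level:
proto = {'L1': ('L', 1.0), 'C1': ('C', 1.0), 'R1': ('R', 1.0)}
out = scale(proto, alpha=2 * math.pi * 1e3, beta=50.0)
```

Note that the resonant frequency 1/√(L'C') lands exactly at alpha times the prototype's, while the impedance level moves to beta.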

Saturday, October 15, 2011

The wavenumber β or the phase constant

β is an important quantity used in understanding transmission lines and waveguides. It is not intuitive, so this post presents a brief explanation of the quantity as it appears in the analysis of transmission lines, waveguides and other wave systems.

Sometimes β is referred to as the phase constant of the line or guide. If a Cartesian coordinate system is used, with the coordinate "z" as the direction of wave propagation, then βz measures the instantaneous phase at point z on the line with respect to z = 0.

In addition, the voltage or current on the line is the same at any two points separated in z such that βz differs by a multiple of 2π. Since the shortest distance between points where the voltage or current is at the same phase is one wavelength, then:

βλ = 2π

( replacing z by λ),

β = 2π/λ


Sunday, October 9, 2011

Multi - chip in a package technology

When a designer has a system built from chips with differing voltages, currents, frequencies and special characteristics, it is difficult to integrate the system for cost or size reduction. In this case the usual approach is a motherboard-daughterboard combination (usually, but not always). Recently it appears that designers are turning to multi-chip in a package technology. In this case a package is used in which the die are assembled in a vertical configuration or side by side. Properly done, this can be a powerful way of getting the job done in a shorter time and at less cost than a difficult integration approach. The design of the multi-chip configuration is the key. Some parameters to be considered seriously are temperature effects, parasitic connections, grounding, and frequency performance. Signal Processing Group Inc. is offering a multi-chip in a package design and assembly service for interested users via the SPG website.

De-embedding in high frequency measurements

High frequency measurements on circuits such as MMICs and high speed digital circuits are made using some kind of vector network analyzer (VNA) or TDR instrument. In most cases the DUT (device under test) is mounted on a test fixture which typically has an input connector and microstrip and an output connector and microstrip. The measurements are to be made on the characteristics of the DUT alone, so the test fixtures have to be de-embedded. This technique and its basics form the subject of the latest brief paper from the technical team at Signal Processing Group Inc. It can be found in the Engineer's Corner.
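The bookkeeping behind de-embedding can be sketched with ABCD (chain) matrices: if the fixture halves are characterized, T_meas = T_in · T_dut · T_out, so T_dut = inv(T_in) · T_meas · inv(T_out). Real de-embedding starts from measured S-parameters converted to chain form; the sketch below shows only the matrix algebra, with made-up matrices:

```python
def mat_mul(a, b):
    """2x2 matrix product."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def mat_inv(a):
    """2x2 matrix inverse."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def deembed(t_meas, t_in, t_out):
    """Strip the fixture halves: T_dut = inv(T_in) * T_meas * inv(T_out)."""
    return mat_mul(mat_inv(t_in), mat_mul(t_meas, mat_inv(t_out)))

# Round trip with made-up chain matrices: embed a "DUT", then recover it.
t_in = [[1.0, 2.0], [0.5, 2.0]]
t_out = [[2.0, 1.0], [1.0, 1.0]]
t_dut = [[1.0, 0.5], [0.25, 1.5]]
t_meas = mat_mul(t_in, mat_mul(t_dut, t_out))
recovered = deembed(t_meas, t_in, t_out)
```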

Saturday, October 8, 2011

Note on bondwire fusing current paper

This is a note to confirm a reference quoted in an article in engineer's corner on bondwire fusing current. The complete reference should read, J. Thomas May, Electrical Overstress - Electrostatic Discharge Symposium 1994.

Useful identities for bipolar design

Bipolar design has been popular for a very long time, and the bipolar device continues to be used today in various forms: in standard bipolar processes, in combination with CMOS in BiCMOS processes, in high current designs, and in high voltage, high current designs. Technology and device vendors keep improving their technologies and processes, and the recent advent of SiGe technology provides a very high performance bipolar device. For the design engineer, a set of identities that enables simple hand calculations on the bipolar device can be useful. Ultimately the circuit design can be breadboarded or simulated to evaluate performance, but hand calculations can be, and should be, a first step. To facilitate this process, the technical team at Signal Processing Group Inc. has recently released a brief paper on some useful bipolar design identities. It is available on the SPG website in the Engineer's Corner.

Saturday, September 17, 2011

Analog and mixed signal design: Some useful identities for MOS IC design

Powerful simulation programs can be used today to simulate IC circuits to the nth degree. However, the initial design is usually done using identities for the various DC and AC parameters of the MOSFET. This allows fast hand calculations and a sanity check of the results obtained from the more complex models built into the simulator. A set of these identities can be found on the SPG website at > engineer's corner.
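As a flavor of such identities, the classic long-channel square-law relations can be coded directly. These are first-order hand-calculation formulas only, and the parameter values below are illustrative:

```python
import math

def id_sat(kp, w_over_l, vgs, vt):
    """Long-channel square law: Id = (kp/2) * (W/L) * (Vgs - Vt)^2 in saturation."""
    vov = vgs - vt
    return 0.5 * kp * w_over_l * vov ** 2 if vov > 0 else 0.0

def gm_sat(kp, w_over_l, i_d):
    """Transconductance in saturation: gm = sqrt(2 * kp * (W/L) * Id)."""
    return math.sqrt(2.0 * kp * w_over_l * i_d)

# Illustrative numbers: kp = 100 uA/V^2, W/L = 10, Vov = 0.2 V
i_d = id_sat(100e-6, 10.0, 0.9, 0.7)  # 20 uA
gm = gm_sat(100e-6, 10.0, i_d)        # 200 uS, which also equals 2*Id/Vov
```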

Friday, September 9, 2011

The MOS Varactor: An introduction

In many IC designs frequency based trimming or control is required. For instance, a filter may need to be trimmed for corner frequencies, a PLL VCO needs its frequency controlled by its feedback signal, and an adaptive equalizer needs to shift its pole-zero configuration. These and many other related applications need a voltage controllable device that offers a change of reactance. The varactor is a component frequently used to do this. In general, varactors are assumed to be junction type devices where the depletion capacitance can be changed to vary the reactance. In CMOS or BiCMOS processes another type of varactor is available, almost as a byproduct of the MOSFET structure: the MOS varactor. It seems that every CMOS process has the capability to produce a MOS varactor. However, although the varactor is available, it may have limitations in Q and sensitivity. In addition, CMOS technology vendors generally do not characterize or optimize their MOS varactors; this is left to those specialized technology vendors who offer high performance or RF type processes. A recent introductory report on the MOS varactor is available at > engineer's corner for interested parties.

Thursday, September 8, 2011

First order filter parameter calculation algorithms

A recurring problem in AC filter circuit design is the calculation of attenuation at a particular frequency, or conversely, the calculation of a frequency given the attenuation. Related calculations deal with estimating time constants and filter parameters such as resistance and capacitance. These calculations play a crucial role in the design of anti-aliasing filters, low pass filters, phase locked loops, etc. A paper recently published by the tech team at Signal Processing Group Inc. documents these calculations, cookbook fashion, with examples for interested readers. The paper is located at > engineer's corner.

Friday, September 2, 2011

An analog frontend for multiple applications

Signal Processing Group Inc. has released an interesting device (silicon proven and volume production proven) for use as a mixed signal controller. At its input is a two/three wire interface (clock, data and latch pins) used to communicate with a micro-controller and memory of the user's choice; the protocol is much like I2C. The inputs are digital words which drive currents multiplexed into a set of six outputs. These outputs can be used to drive LEDs (50 mA each) or other transducers such as pressure sensors, motors, etc. A feedback TIA (trans-impedance amplifier) is used to capture an analog feedback signal. This feedback signal is converted to a 10 bit digital word (conversion time is approximately 100 us) and sent via the serial interface to a micro-controller for processing. Looking at these functional blocks, the device appears well suited for feedback control of various micro-systems including automatic lighting control, toys, sensor interfaces, etc. For further information and a detailed datasheet, please go to the SPG website and use the link to proven IP.

Saturday, August 27, 2011

Analog and mixed signal design: Temperature independent resistors

In IC technology all resistor materials have an associated temperature coefficient. Most commonly, resistors are made from polysilicon, diffusions of various kinds, and metal; the most common of these are poly and diffusion resistors. In certain applications a temperature independent resistor may be required. To build one, the designer has to search the technology properties for resistor materials that provide (1) an appropriate sheet resistance and (2) opposite temperature coefficients. Almost all semiconductor technologies provide this. Once the materials are established, a first order temperature independent resistor may be synthesized as shown in a recent report released by Signal Processing Group Inc. This report may be found at > engineer's corner.
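As a sketch of the first-order idea: two series resistors with opposite temperature coefficients cancel to first order when R1·tc1 + R2·tc2 = 0. The material tempcos below are made-up illustrative numbers, not data from any particular process:

```python
def split_for_zero_tc(r_total, tc1, tc2):
    """Series split R1 + R2 = r_total such that R1*tc1 + R2*tc2 = 0,
    giving a first-order temperature-independent composite resistor.
    tc1 and tc2 (in 1/degC) must have opposite signs."""
    if tc1 * tc2 >= 0:
        raise ValueError("temperature coefficients must have opposite signs")
    r1 = r_total * (-tc2) / (tc1 - tc2)
    return r1, r_total - r1

# e.g. a -200 ppm/C material in series with a +1500 ppm/C material (illustrative):
r1, r2 = split_for_zero_tc(10e3, -200e-6, 1500e-6)
```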

Wednesday, August 24, 2011

Lumped and distributed elements

How does one determine whether to treat a component as lumped or distributed? The answer is that if the element size is greater than lambda/20, where lambda is the effective wavelength of the signal associated with the element, then it should be treated as a distributed component. This means that for typical discrete designs, the lumped approximations are valid for frequencies in the 500 to 1000 MHz range. For ICs the frequency range is much larger because of the small size of the elements encountered there; this range may extend up to 10 GHz. One has to ask: where did the 5% of lambda come from? Like most other things in practical engineering, it is an approximation and a rule of thumb, and should be considered a guideline. A distributed model is usually more accurate at any frequency above DC, but experience says that the 5% guideline is a good transition value. Note: the effective lambda is the lambda in free space divided by the square root of the effective dielectric constant. In homogeneous media the effective dielectric constant is simply the relative permittivity; in non-homogeneous media it is not. Usually, for non-homogeneous systems such as microstrip, the effective dielectric constant is less than the relative permittivity.
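The lambda/20 guideline is easy to turn into a quick check; a small sketch with illustrative numbers:

```python
import math

def max_lumped_freq_hz(element_size_m, eps_eff):
    """Highest frequency at which an element of this physical size still meets
    the lambda/20 rule, i.e. size <= lambda_eff / 20."""
    c = 3.0e8  # free-space speed of light, m/s
    return c / (math.sqrt(eps_eff) * 20.0 * element_size_m)

# A 5 mm discrete part with an effective dielectric constant of 3 (illustrative):
f_max = max_lumped_freq_hz(5e-3, 3.0)  # about 1.7 GHz
```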

Sunday, August 21, 2011

Analog and mixed signal design: A peaking current source

Most of us are very familiar with the Widlar current source, which uses a resistor in series with a diode connected bipolar to act as a source for a current. It is probably the most popular current source in existence. However, this source does have its problems, such as variation with resistance and the low input resistance of the bipolar. There is another, lesser known current source, the "peaking" current source, that at times can be used with advantages beyond those offered by the time honored Widlar source. It is also useful when the supply voltages are low. A white paper on this source is available now, courtesy of the tech team at Signal Processing Group Inc. For interested readers it is located at > engineer's corner.

Friday, August 12, 2011

Analog and mixed signal design: Why 50 Ohm?

Has anyone wondered why we use 50 Ohms as the reference resistance in so many of our designs? 50 Ohm seems to be a de facto standard: we normalize to 50 Ohm, we use 50 Ohm in our oscilloscopes, we pick 50 Ohm as a convenient reference resistor. But how did this happen? Where did this 50 Ohm factor come from? We ran across an explanation which sounds reasonable enough and decided to post it to this blog. Standard coaxial lines in England in the 1930s used a commonly available center conductor which turned out to give 50 Ohms! Others say that for minimum signal attenuation the transmission line characteristic impedance is 77 Ohm, while for maximum power handling it is around 30 Ohm; 50 Ohm is a good compromise between the two. So this is how 50 Ohm became a convenient impedance level!

Sunday, August 7, 2011

The eye diagram - a practical aid to the design of systems

The eye diagram is a very useful and practical tool for analyzing, evaluating, diagnosing and correcting errors in digital communication systems, or indeed any digital/wireless system. The premise is fairly simple: using the eye diagram, a number of valuable parameters may be extracted at a glance. These parameters play a critical role in the transmission and reception of data, and an intuitive understanding of the eye diagram is essential for good design technique and analysis of systems. Simulation of the eye diagram and its measurement can be better understood if one knows the underlying technique of eye diagram construction. A brief exposé of this tool can be found at > engineer's corner.
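The construction itself is essentially folding: slice the sampled waveform into overlapping traces a couple of unit intervals long and overlay them on a common time axis. A minimal sketch of the folding step (the function name and trace length are arbitrary choices):

```python
def eye_slices(samples, samples_per_ui, n_ui=2):
    """Fold a sampled waveform into overlapping traces of n_ui unit intervals,
    advancing one UI per trace; overlaying these traces draws the eye."""
    width = samples_per_ui * n_ui
    return [samples[i:i + width]
            for i in range(0, len(samples) - width + 1, samples_per_ui)]

traces = eye_slices(list(range(40)), samples_per_ui=4)  # 9 traces of 8 samples
```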

Wednesday, July 27, 2011

PRBS signal power calculations using sinc squared function integration

The power in a PRBS NRZ signal is expressed as a sinc squared function of the independent variable x. To calculate the power in this signal from 0 to some arbitrary x, a definite integral of the sinc squared function has to be found. This is not an easy task. We searched the web for ready solutions of this problem and found very few relevant references. Therefore a technique was evolved from series expansions of the sinc and sinc squared functions. The accuracy of the estimates depends on how many terms the engineer retains; we found that using just four or five terms in the expansion allowed us to calculate to within accuracies of interest to us. The technical report can be found in the engineer's corner on the SPG website.
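The series-expansion method itself is in the report; as an independent sanity check, the definite integral can also be estimated by plain numerical quadrature. A sketch using the trapezoidal rule (not the series technique described above):

```python
import math

def sinc2(x):
    """(sin x / x)^2 with the correct x = 0 limit."""
    return 1.0 if x == 0.0 else (math.sin(x) / x) ** 2

def integral_sinc2(x_max, n=100000):
    """Trapezoidal estimate of the integral of sinc^2 from 0 to x_max."""
    h = x_max / n
    total = 0.5 * (sinc2(0.0) + sinc2(x_max))
    for i in range(1, n):
        total += sinc2(i * h)
    return total * h

# The integral over (0, inf) is pi/2; a large upper limit should approach it.
approx = integral_sinc2(1000.0)
```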

Monday, July 25, 2011

Analog and mixed signal design: Input impedance of a common emitter bipolar differential amplifier with emitter degeneration

Use of the emitter coupled bipolar differential amplifier is prolific, and a good way to stabilize gain and bias is the use of an emitter degeneration resistor. This post simply presents, without proof, what happens to the input impedance of the differential pair when degeneration is used. First one has to know rpi of the bipolar small signal model, calculated as Beta0/gm, where Beta0 is the DC current gain of the bipolar. If no degeneration is used, this is the input impedance of the transistor. When a degeneration resistor is used, the impedance rises significantly; the rise in input impedance is (Beta + 1)*Re, where Beta is the current gain at the particular bias point and frequency and Re is the degeneration resistor. Therefore the total input impedance rises to rpi + (Beta + 1)*Re. For other items of interest please visit our website.
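A quick numeric sketch of the identity above (the bias values are illustrative; Vt is taken as about 25.85 mV at room temperature):

```python
def diff_pair_rin(beta0, ic_amps, re_ohms, vt=0.02585):
    """Input resistance of one side of a degenerated bipolar pair:
    rpi + (Beta + 1) * Re, with rpi = Beta0 / gm and gm = Ic / Vt."""
    gm = ic_amps / vt
    r_pi = beta0 / gm
    return r_pi + (beta0 + 1.0) * re_ohms

# Beta0 = 100, Ic = 1 mA, Re = 100 ohm: rpi ~ 2.59 kohm, total ~ 12.7 kohm
rin = diff_pair_rin(100.0, 1e-3, 100.0)
```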

Friday, July 22, 2011

Dot rule for transformers

The dot rule for transformers is a convention used to present the voltage and current relationships and their phase. It is a simple rule and therefore sometimes easy to forget if not used every day. To use this rule we need to know two things: (1) The right hand rule for currents and fluxes, i.e. if the fingers of the right hand are wrapped around the core in the direction of current flow, then the thumb points in the direction of the flux. (2) If a current enters a dotted terminal, it induces a positive voltage at the other dotted terminal; if a current leaves a dotted terminal, it induces a negative voltage at the other dotted terminal. For more technical articles and items of interest please visit our website.

Wednesday, July 20, 2011

Adjacent channel power ratio ( ACPR)

In multicarrier systems the carriers can be spaced quite close to each other. When this is the case, a quantity referred to as the adjacent channel power ratio, or ACPR, becomes important. Multicarrier systems have a number of carriers which may generate signals whose power adds in phase; as more tones or signals start interacting, the peak additive power increases. The average power of these signals may well be within the dynamic range of the system, but the power peaks may exceed it. This causes nonlinear odd-order distortion in the system, which results in adjacent channel power output, or ACP. The ACPR is the ratio of the system output power at an offset frequency to the power of the channel of interest. It can be considered one measure of linearity of a transmitter (or RF PA), since the transmitter or PA generates unwanted sidebands at offset frequencies that lie within the passband of an adjacent channel. For a given modulation scheme, the relationship between third order intermodulation products and the ACPR at a given power level is: ACPR = IMR(2-tone) + 10*log[ n**3/(16X + 4Y)]. Here X and Y are given by:

X = (2n**3 – 3n**2 – 2n)/24 + [mod(n/2)]/8.0


Y = n**3 – {[mod(n/2)]/4.0}

All ratios here are in dBc, i.e. both the two tone intermodulation ratio IMR and the ACPR are referred to the signal carrier. Check out our website and engineer's corner.
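These relations can be transcribed directly into code. The sketch below reads mod(n/2) as (n modulo 2) and log as log base 10; both are guesses about the notation above, so treat the numbers as illustrative only:

```python
import math

def acpr_db(imr_2tone_dbc, n):
    """ACPR from the two-tone IMR per the relations quoted above,
    with mod(n/2) interpreted as (n % 2) and log as log10."""
    m = n % 2
    x = (2 * n ** 3 - 3 * n ** 2 - 2 * n) / 24.0 + m / 8.0
    y = n ** 3 - m / 4.0
    return imr_2tone_dbc + 10.0 * math.log10(n ** 3 / (16.0 * x + 4.0 * y))

# For n = 4: X = 3, Y = 64, so the correction is 10*log10(64/304), about -6.8 dB
a = acpr_db(-30.0, 4)
```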

Friday, July 8, 2011

Random signal generation for PSPICE/SPICE

The SPICE programs we use for circuit simulation do not have a direct way to generate random waveforms; there is no voltage or current source which can be attached to a circuit node to generate a random signal for analysis. As a result we developed code in MATLAB and C++ to generate a PWL (piecewise linear) random waveform of as long a length as required. It is used as a piecewise linear source and can generate the random signal as needed. Please contact us through our website for more information about this circuit simulation tool.
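The basic idea can also be sketched in a few lines of script: generate random levels and emit them as a SPICE PWL() source line. This is a simple stand-in, not the tool described above, and the node names, time step and level range are arbitrary:

```python
import random

def spice_pwl_random(n_points, t_step, v_low, v_high, seed=1):
    """Emit a SPICE PWL() voltage-source line with uniformly random levels,
    one level per time step."""
    rng = random.Random(seed)
    pairs = []
    for i in range(n_points):
        v = rng.uniform(v_low, v_high)
        pairs.append(f"{i * t_step:.9g} {v:.6g}")
    return "Vrand in 0 PWL(" + " ".join(pairs) + ")"

line = spice_pwl_random(4, 1e-9, 0.0, 1.0)
```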

Saturday, July 2, 2011

Decimation filters for Sigma - Delta A/Ds

A typical pre-decimation filter for an oversampled A/D is the Hogenauer filter, also called the CIC filter. These filters have some advantages which make them particularly suitable for use as decimation filters. In general the output stream from an OSR A/D is a 1 bit high frequency digital signal. The 1 bit signal has to be downconverted in frequency and increased in bit width; this is the fundamental decimation operation. Hogenauer filters offer the following advantages: (1) No multipliers are needed. (2) No storage is needed for filter coefficients. (3) Intermediate storage is reduced by integrating at the high sampling rate and comb filtering at the low rate. (4) The structure of CIC filters is very uniform, using only two basic building blocks. (5) Little external control or complicated local timing is required. (6) The same design can easily be used for a wide range of rate change factors with the addition of a scaling unit. As a result of these advantages, Hogenauer filters have been and continue to be used in oversampled systems. A technical report prepared by the technical staff at Signal Processing Group Inc. is now available in a series of posts that deal with the Hogenauer filter as well as OSR A/D converters. Since the CIC filter is an important component at the back end of an OSR ADC, understanding its design parameters is essential to the design of the overall OSR ADC. Subsequent posts deal with the details of decimation filter design. The paper may be found at > engineer's corner.
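A minimal behavioral sketch of the Hogenauer structure (integrators at the input rate, the rate change, then combs at the output rate, with a differential delay of one) illustrates the multiplier-free data path:

```python
def cic_decimate(x, r, n_stages):
    """Minimal Hogenauer (CIC) decimator: n_stages integrators at the input
    rate, decimation by r, then n_stages combs (differential delay M = 1) at
    the output rate. Integer arithmetic only; no multipliers or coefficients."""
    for _ in range(n_stages):          # integrator section
        acc, y = 0, []
        for v in x:
            acc += v
            y.append(acc)
        x = y
    x = x[::r]                         # rate change
    for _ in range(n_stages):          # comb section
        prev, y = 0, []
        for v in x:
            y.append(v - prev)
            prev = v
        x = y
    return x

# DC gain is (r*M)**n_stages: a constant input of 1 settles to r**n_stages.
out = cic_decimate([1] * 64, r=4, n_stages=2)  # settles to 16
```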

Wednesday, June 29, 2011

Analog and mixed signal design: The role of the heat sink in thermal management of active devices

As power circuit designs and devices proliferate in products such as LED drivers, HID lighting, motor control and electric vehicles, it is becoming important to understand thermal effects in active devices. All active devices dissipate power, and power active devices dissipate lots of power. This power dissipation creates heat which must be removed by some means to prevent excessive heat buildup inside a package or module, which would ultimately lead to the destruction of the appliance, circuit or device. One way devices can be made thermally safer is the use of a passive heat sink. The role of the heat sink in active device thermal management is explored in a recent report released by Signal Processing Group Inc.'s technical staff and may be found in: > engineer's corner > heatsink.pdf.
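The first-order model behind heat sink selection is a series chain of thermal resistances from junction to ambient. A sketch with hypothetical numbers (always use the real datasheet values):

```python
def junction_temp_c(p_watts, t_ambient_c, r_jc, r_cs, r_sa):
    """Tj = Ta + P * (Rth_jc + Rth_cs + Rth_sa): series thermal resistances
    from junction to case, case to sink, and sink to ambient, all in C/W."""
    return t_ambient_c + p_watts * (r_jc + r_cs + r_sa)

# 10 W device, 25 C ambient, hypothetical resistances 1.5 + 0.5 + 4.0 C/W:
tj = junction_temp_c(10.0, 25.0, 1.5, 0.5, 4.0)  # 85.0 C
```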

Thursday, June 23, 2011

RF and wireless design: Substrates and laminates

Yesterday we spent an absolutely intense two hours in discussions of substrates for RF and high frequency design with a couple of experts. Frequencies from about 1 Ghz to 77 Ghz were in play. The amazing part of the discussions was the level of parameters to be considered, not only in the manufacture of the laminates but also the layout of the interconnect, filters, transmission lines, and heat sinking.For high speed digital the control of the impedance/constant line width was more of a factor, unlike in RF where multiple line widths and shapes are in common use. A multitude of transmission lines are used in a bewildering array of combinations. Other parameters such as the glass weave and its impact on impedance was a discussion worth having. Three laminates emerged as winners for the a large number of applications in design. The venerable FR4 was buried under the the new requirements at 77 Ghz and even at 24 Ghz.The impact of DF and DK ( buzz words of course to be treated in some detail in subsequent posts). The use of materials and their trade-offs were fascinating. The size of the material sold has also gone through revisions and large sizes are now common. Gone are the limits of 18 X 24. The other very interesting issue that surfaced was the role of, and difficulty in, testing of not only devices but also the substrates themselves. The relationships between the thickness and the width of lines changes from the simple expressions we all knew. The difficulty of modeling has increased and very few CAD tools appear to have the capability to do what is needed. Only one CAD tool was mentioned several times as a recommended one for design and modeling at the high performance levels. Some very interesting numbers for insertion loss and actual measured values of permittivity and loss tangents were presented and argued over. Very interesting empirical design equations and data was presented as well. 
In this discussion the effect of the roughness factor was presented and emphasized. Finally, a detailed discussion of the materials of construction, such as resins, fillers and reinforcements, ended the presentations. In short, a very interesting couple of hours. Interested parties may contact us about these subjects through the contact link on our website.
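As a taste of how the line-width and substrate-height relationships play out, here is a minimal Python sketch of the classic closed-form (Hammerstad-type) microstrip impedance estimate. The dimensions and Dk value below are illustrative assumptions, not data from the discussion, and the formula itself is an approximation that real layout tools refine considerably:

```python
import math

def microstrip_z0(w, h, er):
    """Approximate characteristic impedance (ohms) of a microstrip line of
    width w over a substrate of height h (same units) and relative
    permittivity er, using the classic Hammerstad closed-form expressions."""
    u = w / h
    # Effective permittivity of the mixed air/dielectric field region
    e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    if u <= 1:
        return 60 / math.sqrt(e_eff) * math.log(8 / u + u / 4)
    return 120 * math.pi / (math.sqrt(e_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444)))

# Illustrative example: a 3 mm line on 1.6 mm FR4 (er ~ 4.4) comes out near 50 ohms
print(round(microstrip_z0(3.0, 1.6, 4.4), 1))  # ~50.8 ohms
```

Narrowing the line raises the impedance, which is one reason a single controlled width works for digital while RF layouts juggle many widths at once.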

Friday, June 17, 2011

Analog and mixed signal design: Thermal modeling

After a number of mishaps in the design of power ICs and modules, with devices blowing up, we decided to go back to first principles, understand thermal effects, and use thermal modeling to design better, safer and more robust (with respect to thermal operation) devices and modules. In the process we came across many different types of information in the literature on thermal design, from the very simple thermal resistance and power relationships to fairly complicated thermal models, as well as information on thermal modeling software. This was in 2008. We took this information and wrote a brief thermal modeling technical note in the hope that we would have fewer incidents of thermally caused destructive events. Another interesting result was that we were able to set up thermal models of MCMs and devices that did not yet exist and study thermal effects on these to-be devices. These models were built in MATLAB/SIMULINK and were still fairly simple; we used commercially available thermal modeling software for the more complex models. All this effort did help, and in the end we met our thermal design goals in a large number of projects. The initial note was released for publication and now resides in the engineer's corner on our website for those interested in thermal modeling or thermal effects. We acknowledge the contributions made to thermal modeling by a number of authors both on and off the web.
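As a reminder of the first-principles starting point mentioned above, here is a minimal sketch of the series thermal-resistance model; the resistance and power values are illustrative assumptions, not numbers from any specific design:

```python
def junction_temp(t_ambient, power, r_jc, r_cs, r_sa):
    """Steady-state junction temperature from the series thermal-resistance
    model: Tj = Ta + P * (Rth_jc + Rth_cs + Rth_sa), all Rth in degC/W
    (junction-to-case, case-to-sink, sink-to-ambient)."""
    return t_ambient + power * (r_jc + r_cs + r_sa)

# Illustrative example: 10 W device at 25 degC ambient
tj = junction_temp(25.0, 10.0, r_jc=1.5, r_cs=0.5, r_sa=4.0)
print(tj)  # 85.0 degC -- to be compared against the device's Tj(max) rating
```

Simple as it is, this relation already flags many "devices blowing up" scenarios before more elaborate transient models are needed.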

Monday, June 13, 2011

USB3 interface design: the K28.5 sequence as a test pattern

In NRZ (non return to zero) signaling, a series of 1's and 0's is used, each digit occurring with a probability of 50%. As a result there is a relatively high probability of getting a long run of 0's or 1's in the signal. The spectrum of such a sequence contains low frequency content, and consequently high frequency transmission design can become difficult. To alleviate this problem, data encoding or scrambling is used. A typical technique (used in USB3, for example) is 8b/10b encoding, in which an 8 bit word is encoded into a 10 bit word. The extra bits are added to keep the number of 0's and the number of 1's balanced over a given bit interval. This encoding can also be used to improve BER (but that is another posting!). Different applications may use different types of encoding as well as different test patterns. One of the test patterns (a ubiquitous one) is the
K28.5 pattern. This pattern is a composite of a K28.5+ and a K28.5- bit word and can be described as follows: K28.5+ = 1100000101, and K28.5- (the inverse of K28.5+) = 0011111010. The complete pattern is thus 11000001010011111010. In USB3 circuit design this pattern is encountered often. Please visit the engineer's corner on our website for other interesting articles on wireline communications.
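The composite pattern can be built and sanity-checked in a few lines; this sketch simply reproduces the bit words quoted above and verifies the DC-balance property that motivates 8b/10b in the first place:

```python
K28_5_PLUS  = "1100000101"          # K28.5 word with one running disparity
K28_5_MINUS = "0011111010"          # the bitwise inverse of K28.5+
pattern = K28_5_PLUS + K28_5_MINUS  # the composite 20-bit test pattern

# K28.5- really is the inverse of K28.5+
assert K28_5_MINUS == "".join("1" if b == "0" else "0" for b in K28_5_PLUS)
# The composite pattern is DC balanced: ten 0's and ten 1's
assert pattern.count("0") == pattern.count("1") == 10

print(pattern)  # 11000001010011111010
```

The balanced bit count is exactly why this pattern makes a good stress/test sequence for AC-coupled high speed links.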

Saturday, June 11, 2011

RF Amp design 2: Load pull analysis

In the design of RF power amplifiers it is useful (and important) to know how the output power of the amplifier is influenced by changes in the load impedance under varying conditions. A useful technique for gaining this understanding is "load pull analysis". It is a (usually) graphical technique that uses the Smith chart to plot contours of load impedance at constant output power. It provides valuable information to the engineer/user about the performance of the amplifier, for assessing its quality, its conditions of operation, its fit to the design and various other parameters. A technical article on the technique has been released by Signal Processing Group technical staff and is available for perusal by interested parties in the engineer's corner.
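As a purely conceptual sketch of why power contours form around a best-case load: the toy model below uses an ideal linear Thevenin source, whereas a real load pull is performed on a nonlinear amplifier with mechanical or active tuners sweeping the whole Smith chart. All element values here are illustrative assumptions:

```python
def delivered_power(vs_rms, zs, zl):
    """Power (W) delivered to load zl from a Thevenin source vs_rms, zs."""
    i = vs_rms / (zs + zl)          # load current (complex, rms)
    return abs(i) ** 2 * zl.real    # only the resistive part absorbs power

vs, zs = 2.0, complex(50, 0)        # assumed 2 V rms source, 50 ohm source Z
for rl in (12.5, 25, 50, 100, 200):
    p_mw = 1000 * delivered_power(vs, zs, complex(rl, 0))
    print(rl, round(p_mw, 2))       # load (ohms), delivered power (mW)
```

Delivered power peaks at the matched 50 ohm load and falls off symmetrically on either side; loads giving equal power (here 25 and 100 ohms) would sit on the same constant-power contour of a load-pull plot.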

Saturday, April 30, 2011

Load line analysis for RF power amps

The most basic of analyses for RF power amps (or, for that matter, any power amp) is load line analysis. It is true that we all learned this in our formative years. It is equally true, however, that we have graduated to high performance, complicated CAD programs that do so many things invisibly that we no longer (sometimes) want to know how the tool got to where it got. A somewhat similar condition is common in digital ASIC design, where the designer no longer needs to know how a logic gate works or what its device level parameters are; he or she simply writes code that enables the design at a high level of abstraction. A brief expose of load line analysis is presented in a newly released paper by SPG and may be found under the engineer's corner for interested readers.
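As a reminder of what the hand analysis gives before any CAD tool is opened, here is a minimal sketch of the first-cut class-A load-line resistance; the supply, knee voltage and power target are illustrative assumptions:

```python
def class_a_load(v_supply, v_knee, p_out):
    """First-cut class-A load-line resistance: the load that lets the drain
    swing between v_knee and (2*Vdd - v_knee) while delivering p_out,
    i.e. RL = (Vdd - Vknee)^2 / (2 * Pout)."""
    return (v_supply - v_knee) ** 2 / (2.0 * p_out)

# Illustrative example: 5 V supply, 0.5 V knee voltage, 1 W target
print(class_a_load(5.0, 0.5, 1.0))  # 10.125 ohms
```

This is the line one would draw across the device I-V curves; the matching network then has to present this resistance to the drain at the frequency of interest.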

Friday, March 25, 2011

Analog and mixed signal design: A reduced power capacitive load CMOS clock driver

When designing clock drivers for capacitive loads (or indeed for any load) using a CMOS inverter type driver, the power dissipation can be large unless precautions are taken to attenuate the direct current that flows through the P and N channel output transistors during the fraction of the drive cycle when both transistors are momentarily ON.

A simple way to alleviate this problem is to use a non-overlapping clock driver. Such a driver is presented on our website in the engineer's corner. A simple and useful circuit.
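A behavioral sketch of the timing (not of the transistor circuit itself) can illustrate the non-overlap idea: each phase turns on only some dead time after the master clock edge, but turns off immediately, so the two output devices are never driven simultaneously. The sample counts and dead time below are illustrative assumptions:

```python
def non_overlap(master, dead):
    """Derive two non-overlapping phases from a 0/1 master clock sequence.
    Each phase asserts only `dead` samples after the relevant master edge
    (via an AND with a delayed copy) but deasserts immediately, so the
    phases never overlap."""
    n = len(master)
    delayed = [master[0]] * dead + master[:n - dead] if dead else master[:]
    phi1 = [a & b for a, b in zip(master, delayed)]                  # high phase
    phi2 = [(1 - a) & (1 - b) for a, b in zip(master, delayed)]      # low phase
    return phi1, phi2

master = ([1] * 5 + [0] * 5) * 2          # 10-sample-period master clock
phi1, phi2 = non_overlap(master, dead=2)
assert all(a + b < 2 for a, b in zip(phi1, phi2))  # never both high
```

The two-sample gap around each edge is the interval in which a plain inverter would conduct shoot-through current; here neither phase is asserted during it.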

Wednesday, February 23, 2011

Half IF spurious response and the second order intercept point

An irksome 2nd-order spurious response, called the half-IF (1/2 IF) spurious response, is defined for the mixer indices (m = 2, n = -2) for low-side injection and (m = -2, n = 2) for high-side injection. For low-side injection, the input frequency that creates the half-IF spurious response lies below the desired RF input frequency by fIF/2. In this example the desired RF frequency is 2400 MHz; in combination with an LO frequency of 2200 MHz, the resulting IF frequency is 200 MHz. An undesired signal at 2300 MHz then causes a half-IF spurious product at 200 MHz. For high-side injection, the input frequency that creates the half-IF spurious response lies above the desired RF by fIF/2. Note that high-side injection implies that the LO frequency is above the RF frequency, and low-side injection implies that it is below.

The second order intercept point is used to predict mixer performance with respect to the half-IF spurious response. For further details please see the article under engineer's corner/engineering pages on our website.
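The arithmetic of the example above can be sketched in a few lines; the helper function is ours, written just to make the m = ±2, n = ∓2 relationship explicit:

```python
def half_if_spur(f_rf, f_lo):
    """Input frequency that produces the half-IF spurious response
    (mixer indices m=2, n=-2 for low-side LO, m=-2, n=2 for high-side)."""
    f_if = abs(f_rf - f_lo)
    if f_lo < f_rf:                  # low-side injection: spur below desired RF
        f_spur = f_rf - f_if / 2
    else:                            # high-side injection: spur above desired RF
        f_spur = f_rf + f_if / 2
    return f_spur, f_if

# The example from the post: RF 2400 MHz, low-side LO at 2200 MHz
f_spur, f_if = half_if_spur(2400.0, 2200.0)
print(f_spur, f_if)                  # 2300.0 200.0
# Sanity check: 2*f_spur - 2*f_lo lands exactly on the IF
assert abs(2 * f_spur - 2 * 2200.0) == f_if
```

Doubling the spur and the LO (the second-order products) shows why the 2300 MHz interferer folds exactly onto the 200 MHz IF.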

Saturday, February 5, 2011

More on cable modeling

In our efforts to understand cable models a little better, we took some measured data from Analog Devices (Jonathan Pearson, "An adjustable cable equalizer combines a wideband differential receiver with analog switches", Analog Dialogue, Vol. 38, July 2004) and an approximate expression provided by David A. Johns and Daniel Essig ("Integrated circuits for data transmission over twisted pair channels", IEEE Journal of Solid State Circuits, March 1997; a little dated, but very useful, since this is one paper that actually had something we could get our hands around and use practically). We used MATLAB to generate the loss characteristics per the Johns et al. paper and plotted them along with the Analog Devices data. It appears that at short cable lengths there is generally good agreement between the two sets of data; as cable lengths increase, the correspondence gets worse and worse. If one has to use the theoretical approximate expression, a correction factor of 2.5 dB to 7.5 dB may be needed to get closer estimates. The results are in the "Engineer's Corner" on our website.

Bandwidth requirement to pass fast rising digital signals

How wide must the bandwidth be to pass a fast rising digital pulse so that at the output it can still be recognized and detected as a pulse? A common enough question, but sometimes the answer is not so obvious. Common wisdom says the -3 dB point of the filter or medium through which the pulse must pass should be at least 1/(pi*tr), where tr is the risetime and pi is 3.1415... Upon simulation using a simple RC filter, the results are: (a) the rule is correct; (b) the pulse width and period must be such as to accommodate the rise and fall times of the pulse; (c) the bandwidth may be narrower if the detection threshold can be set higher; and (d) if the detection threshold is low, detection errors may occur when the above rules are disobeyed!
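The rule of thumb can be checked numerically for a single-pole RC: its 10%-90% risetime is ln(9)·RC and its -3 dB bandwidth is 1/(2π·RC), so the product f3dB·tr is a constant of about 0.35, which is essentially the 1/(π·tr) rule quoted above. The 1 ns time constant is an arbitrary illustration:

```python
import math

rc = 1e-9                                    # assumed 1 ns time constant
# Exact 10%-90% risetime of the RC step response: tr = ln(9) * RC
tr = (math.log(0.9) - math.log(0.1)) * rc
f3db = 1 / (2 * math.pi * rc)                # -3 dB bandwidth of the RC pole
print(round(f3db * tr, 3))                   # ~0.35, independent of rc
```

The product is independent of RC, which is why the rule can be stated without reference to any particular filter.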

Sunday, January 30, 2011

Cable model for high speed data transmission

Looking at cable models, an interesting expression was found. Though approximate and based on "typical cable parameters", it can be used to at least get a feel for the challenges of transmitting high speed data down a UTP cable. The expression is f3db = 2.25E4/(d*d), where d is the length of the cable in km and f3db is in Hz. From this an interesting conclusion can be drawn: the bandwidth of this "typical" cable at a length of 100 meters is only 2.25 MHz. At the other extreme, a 1 meter cable has 22.5 GHz of bandwidth, a 10 meter cable 225 MHz, and so on. So if we need to transmit data at high MHz or low GHz speeds over a longer cable, there is no option but to design and use an equalizer. That, of course, is a whole new ball game with many issues. From an initial assessment it appears that an analog equalizer is the best option here: the design is understandable, and the technology exists to implement it at reasonable cost, in reasonable time, and in reasonable space on an ASIC as needed.
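The numbers quoted above follow directly from the expression; a quick sketch (the helper name is ours, and the units follow the reading of the formula used in this post, d in km and f3db in Hz):

```python
def utp_f3db_hz(length_m):
    """Approximate -3 dB bandwidth (Hz) of a 'typical' UTP cable,
    per the post's expression f3db = 2.25e4 / d^2 with d in km."""
    d_km = length_m / 1000.0
    return 2.25e4 / (d_km * d_km)

for length in (1, 10, 100):
    print(length, round(utp_f3db_hz(length) / 1e6, 2))  # length (m), BW (MHz)
# -> 1 m: 22500.0 MHz (22.5 GHz), 10 m: 225.0 MHz, 100 m: 2.25 MHz
```

The inverse-square dependence on length is what makes the equalizer unavoidable once cable runs grow beyond a few meters at these data rates.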

Sunday, January 23, 2011

Analog and wireless design: Re - visiting average and effective values, voltage, current and power.

Having become accustomed to clicking icons on a simulator to get fast readings of average values, RMS values, power and so on, we were startled to find that we could not easily compute effective and/or average values of arbitrary waveforms by hand to crosscheck the simulation results. So we decided it was time to revisit the so-called "textbook" definitions, see what we came up with, and re-educate ourselves. It is often the bare essentials that trip us up. After a fairly close look we documented our findings in workbook fashion, just in case we need them from time to time. This little report is available on our website under engineer's corner/engineering pages.
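As a tiny example of the kind of crosscheck described, the average and effective (RMS) values of a sampled waveform can be computed directly from the definitions and compared against the textbook results for a sine wave (mean of |sin| is 2/π ≈ 0.637, RMS of sin is 1/√2 ≈ 0.707):

```python
import math

def rms(samples):
    """Effective (RMS) value: square root of the mean of the squares."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def average(samples):
    """Plain arithmetic mean of the samples."""
    return sum(samples) / len(samples)

# One full period of a unit sine wave, sampled finely
n = 10000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(rms(sine), 4))                        # ~0.7071 (= 1/sqrt(2))
print(round(average([abs(x) for x in sine]), 4))  # ~0.6366 (= 2/pi)
```

The same two functions work on any arbitrary waveform exported from a simulator, which is exactly the crosscheck the workbook was written for.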

Saturday, January 22, 2011

Analog and mixed signal IC design: DMOS transistors

We are all familiar with the MOSFET. Some of us are also very familiar with JFETs.
However, there are a number of transistor types that are not so common. One of these is the DMOS, or double diffused MOS, transistor. In recent years the DMOS transistor has been used more and more to provide high voltage capability to analog and mixed signal IC designers. It is very popular in the design of MEMS interfaces, where higher voltages are required but currents are usually not high. DMOS transistors can deliver higher currents but need a larger device size; the tradeoff is obvious. The DMOS structure is an interesting one. For further detailed information please go to our website and take a look at the DMOS tutorial article in the engineering pages.

Saturday, January 8, 2011

Analog and mixed signal and RF ASIC success factors

An analysis of several success factors in analog/mixed signal/RF ASIC design and manufacture turned up a number of interesting facts. Many of the reasons for success have already been described elsewhere in this blog (in its very first entry), but the interplay of relationships and their impact on the success of ASIC design and development had not been touched. Much to our surprise, an analysis of over 100 ASIC projects executed at SPG indicated that when significant success was achieved, not only were the obvious success factors present, but three further conditions, centered on the customer interface, held as well: (1) the customer interface was a technical person closely involved in the design from the system side; (2) the technology being used to implement the ASIC was an excellent fit to the requirements; and (3) the fabrication vendor relationship with SPG was strong and close. When these conditions were met, in addition to the success factors quoted elsewhere in this blog, the probability of clear success was over 99% (conservatively); we did not find a single failure in our list of 100 projects when they held. The objective of this entry is thus to add this success factor to the list.