SMF for DIYStompboxes.com!

Opamp Clipping

Started by Bill Mountain, March 05, 2012, 01:15:46 PM

Bill Mountain

When thinking about opamp clipping, would lowering the supply voltage produce any different style clipping?  Would raising it have any benefits?

I was thinking about CMOS opamps, and it occurred to me that if I used a resistor on the V+ pin to lower the supply voltage, the opamp would clip with smaller signals.  Then, if I cascaded several stages clipped at higher and higher voltages (or with smaller and smaller resistors on the supply pin), I could get smooth clipping without having too much gain.

gritz

Hmmm... interesting - so you're effectively lowering the supply voltage on the signal peaks. It may need a resistor in both supply lines if you want it to be symmetrical, but asymmetry is good here too. You may have to be careful with your signal input voltage, though - see if there are any dire warnings on the spec sheet about the input exceeding the supply voltage. Admittedly I'm not a CMOS opamp expert, but I'm all for torturing components in the name of science.  :icon_biggrin:

R.G.

Great minds run in the same ruts.  :icon_biggrin:
Bear with me, I'll show you what I mean.

Opamp clipping is a special case. For all signals in its low/middle frequency range, it's always a flat-line, razor-sharp transition into clipping.

The fundamental operation of an opamp is to use a very high open loop gain to hide any oddities the amp actually does. In general, "oddities" come out as "doesn't respond exactly linearly," and that translates to a change in gain, either instantly or with frequency. The math governing feedback says that any burbles are effectively reduced by the excess of the open loop gain over the closed loop gain. So for an open loop gain of 100,000 (100 dB) and a closed loop gain of 10 (20 dB), nonlinearities and frequency response errors are reduced by a factor of 10,000 (80 dB). The dominant-pole compensation rolls the open loop gain off monotonically at -20 dB/decade above the compensation pole, so the open loop gain declines with frequency; still, it's quite high throughout the audio range.
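That dB arithmetic can be checked in a few lines. A worked example using the illustrative numbers above (100 dB open loop, 20 dB closed loop), not the specs of any particular opamp:

```python
# Worked example of the feedback arithmetic: the error-reduction factor
# is the excess of open loop gain over closed loop gain.

open_loop_db = 100    # open loop gain: 100,000x
closed_loop_db = 20   # closed loop gain: 10x

excess_db = open_loop_db - closed_loop_db     # excess (loop) gain in dB
reduction_factor = 10 ** (excess_db / 20)     # back to a plain ratio

print(f"nonlinearity reduced by about {reduction_factor:,.0f}x")
# prints: nonlinearity reduced by about 10,000x
```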

What happens when the output stage of the opamp reaches its power supply limits is that the output devices can't raise or lower the output voltage any more than they already have. Effectively, their gain starts dropping from (usually) unity to something less. They saturate, one way or another. When they saturate, they go from nearly unity gain to no gain over a range similar to a diode drop, maybe half a volt or so, depending on devices, area, diffusion, phase of moon, etc.

The excess gain in the opamp gets used up in 'hiding' that drop in gain of the output devices in the last little bit of voltage range until all the available gain in the amplifier open loop is used up too. So the open loop gain in the amplifier sections hides any soft limiting the output devices do. When the open loop gain can no longer hide the gain droop of the output devices, the whole opamp's gain is used up: that's all there is and there ain't no more. The output flat lines hard. An equivalent way to look at this is that the roundness of the saturation knee of the opamp is reduced by the open loop gain ratio too; something like being made 10,000 times more sharp in transition from linear to flat/saturated.

All that is why opamp clipping in general is always razor sharp. The feedback hides any nonlinearity till it all happens at once. So, in general, no, you can't soften an opamp clipping for any mode where it works like it's supposed to.

Now to the rut. When I came up with the idea of "how much time does the signal spend in the knee?", I pondered this a lot. You're right, that by limiting a signal at known points one can ensure the size of the signal fed to the next stage. My solution was CMOS inverters.

CMOS inverters - the unbuffered CD4069UB, for instance - have an open loop gain of maybe 30 dB, which is lower than an opamp's to start with. They have a truly sloppy, wide range of distortion near the + and - sides of the power supply range, and in general have non-symmetrical oddities at the + and - sides. My solution to the many-stages/limited-gain concept was CMOS inverters run as linear amplifiers: a feedback network to hold each one at a DC level in the middle of the supply, a reduced DC supply to cut both the DC power dissipation and the size of the signal, and a network on the output of each stage to cut the signal back down by a bit less than the stage amplified it.

So, with a CMOS stage giving me a gain of 30 dB (about 32x), I would use a split feedback resistor from output to input, with a capacitor shunting the AC to ground so the stage is running open loop for AC signals but self-biasing at about half its power supply. Then from the output, a capacitor to break the DC level, and a resistor divider to cut the resulting signal back down by about, say, 30. A 100mV signal at the input gives a nominal 3.2V output if all is linear. I'd run the CMOS down at around 3-4Vdc, so this is near clipping for the amplifier, and a lot of the signal is in the soft clipping range. How much is there is set by the actual gain of the inverter and by the power supply voltage.
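As a rough sketch of one such stage: the toy model below applies a 30 dB linear gain and soft-limits it with tanh() at the supply. The 3.5 V supply and the tanh curve are assumptions for illustration, not a real inverter's transfer characteristic:

```python
import math

# Toy model of one self-biased CMOS inverter stage: linear gain of
# about 32x (30 dB), soft-limited by the supply via tanh().

def inverter_stage(v_in, gain=32.0, v_supply=3.5):
    limit = v_supply / 2.0   # max swing away from the mid-supply bias
    return limit * math.tanh(gain * v_in / limit)

# A 100 mV input "wants" 3.2 V out, but the stage can only swing about
# 1.75 V, so the output sits well into the soft-clipping knee.
print(f"output: {inverter_stage(0.100):.2f} V")
```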

The resistor divider then cuts this signal back to 3.2/30 = 107mV. Next stage gives up to .107*32 = 3.4V. Depending on whether the inverter stage has the power supply to do this or not, it may or may not get there, and may be flat-topped. Let's say it can. So that's then divided down by 30, to 114.1mV as an input to the next stage.

If you iterate stages, you get a gain of G and then an attenuation of D. As long as G > D, the nominal signal level increases slowly, by G - D (in dB) for each stage. The advantage of this is that nearly all the signal is in the soft, sloppy compressing range of the inverter's amplification, and none of that imperfection is hidden behind feedback.
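The iteration can be sketched numerically. This toy model reuses a tanh soft limit (an assumption, not a measured curve), and the divider is 15 rather than the 30 in the linear arithmetic above, because the tanh model eats more of the gain than the linear numbers assume; the point it shows is the level creeping upward each stage toward a ceiling set by the supply:

```python
import math

# Numeric sketch of the cascade: each stage applies soft-clipped gain,
# then a divider knocks the level back down by slightly less, so the
# nominal level creeps up a little per stage.

def stage(v, gain=32.0, v_supply=3.5):
    limit = v_supply / 2.0
    return limit * math.tanh(gain * v / limit)

def cascade(v_in, n_stages, divider=15.0):
    levels, v = [], v_in
    for _ in range(n_stages):
        v = stage(v) / divider   # amplify and soft-clip, then divide down
        levels.append(v)
    return levels

for i, lvl in enumerate(cascade(0.100, 4), start=1):
    print(f"after stage {i}: {lvl * 1000:.1f} mV")
```

The levels rise monotonically but can never exceed (supply / 2) / divider, which is R.G.'s point about knowing exactly what the next stage will see.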

The disadvantage is that (1) it's fragile: each inverter package may have a different open loop gain; (2) it's noisy, as the noise of each stage cascades, as do the resistor thermal noises; (3) it's complex, at least in that it uses a lot of parts; and (4) you have to like the distortion of a CMOS inverter.

But it is an interesting way to go.  :icon_biggrin:
R.G.

In response to the questions in the forum - PCB Layout for Musical Effects is available from The Book Patch. Search "PCB Layout" and it ought to appear.

Bill Mountain

Anytime I've used a CMOS chip it was to get max gain per stage and I was never really happy.  Thanks for the tips on calculating more reasonable gain stages!

R.G.

CMOS inverter gain is a bit funny. The effective gain of the inverter changes with the power supply voltage. It's higher at 12-15V power supply than down at the minimum 3-4V.

In this scheme, the CMOS runs at full gain for whatever it has, but the output is then robbed of nearly all the signal size it would have provided, by that resistive divider. So you get the open loop distortion of the CMOS stage with only a bit of the added signal size. Furthermore, you know EXACTLY the maximum output signal that can possibly be provided to the resistor divider and the following stage: it's the power supply to the CMOS stage, minus a few millivolts.

You can play games. The P and N channel devices in a CMOS inverter distort differently, so the output has some asymmetry. To prevent that from always being per... er, converted to symmetrical-ish in the next stage, you can run a gain-of-one inverter before going into a second CMOS stage, and re-invert the polarities so the asymmetry gets more, not less. It's not quite this simple, but it's an OK way to think of it to start with.
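The polarity game can be illustrated with a toy asymmetric stage (the 1.2 V and 1.75 V limits are made-up numbers, not CMOS data): two inverting stages back-to-back tend to symmetrize the clipping, while an extra inversion between them keeps both stages clipping the same polarity harder:

```python
import math

# Toy model: an inverting stage with different (made-up) soft limits
# on the two output polarities.

def asym_stage(v, gain=32.0, lim_pos=1.2, lim_neg=1.75):
    out = -gain * v                        # inverting stage
    lim = lim_pos if out > 0 else lim_neg  # asymmetric clipping limits
    return lim * math.tanh(out / lim)

def two_stage(x, reinvert):
    s1 = asym_stage(x) / 30.0
    if reinvert:
        s1 = -s1                           # unity inverter between stages
    return asym_stage(s1)

wave = [0.100 * math.sin(2 * math.pi * t / 200) for t in range(200)]
for reinvert in (False, True):
    ys = [two_stage(x, reinvert) for x in wave]
    ratio = max(ys) / -min(ys)             # +peak vs -peak of the output
    print(f"reinvert={reinvert}: +peak/-peak = {ratio:.2f}")
```

With these numbers, the back-to-back pair comes out nearly symmetrical, while the re-inverted version keeps a clearly lopsided output, which is the effect described above.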

Note that you have to use unbuffered CMOS inverters. The buffered ones have multiple internal gain stages and are not effectively tamed by feedback resistors.