
Module 3A: AC Circuits I Frequency, Period, Amplitude and Reactance

I. Overview

a. The joy of resistors and ideal voltage/current sources

When we started this course, we talked about ideal voltage sources, ideal current sources and resistors. With these components, our primary circuit analysis mainstays V and I were independent of time, so all we had to do was solve for the instantaneous, constant values of those quantities. In many cases we could solve for these quantities directly using Ohm's Law, or by using our rules to combine resistors in parallel or in series to produce a circuit that could be analyzed using Ohm's Law. In others we had to use Kirchhoff's Laws, whether through the mesh current, branch current, or node voltage techniques, which we could solve using several different algebraic methods (by substituting one equation into another, by doing Gaussian elimination, or by using Cramer's rule). Finally, we learned several network theorems that enable us to replace a circuit, or part of a circuit, with a simpler circuit, or with the superposition of several simpler circuits, whose analysis is more straightforward. But again, in all cases there was no time dependence, so if we found the Vs and Is at one moment in time, the same values would hold at all times.

b. The joy multiplied: capacitors, inductors and time-dependent circuits

When we introduced the capacitor and the inductor, we dealt with the possibility that V and I could have time dependence, which led to our analyses of RC and RL circuits. In these cases we developed our solutions for the canonical simple RC and RL circuits, all of which involved currents and voltages varying as e^(−t/τ) or (1 − e^(−t/τ)) (i.e. either decaying exponentially from a maximum value to zero, or rising from zero up to an asymptotic maximum value), with τ = RC for the simple RC circuit, and τ = L/R for the simple RL circuit. More complex RC or RL circuits could in many cases be simplified, with resistors replaced by an Req (adding them in series, the "one over" formula in parallel), capacitors replaced by a Ceq (the "one over" formula in series, adding them in parallel), and inductors replaced by an Leq (adding them in series, the "one over" formula in parallel). In cases where we could not do series or parallel decomposition of the separate elements, we would have no choice but to write down Kirchhoff's laws and set up simultaneous differential equations, which turn out to be even more difficult to solve than the simultaneous algebraic equations we got for resistive networks. Though there are general methods for analyzing such equations, they are beyond the scope of this course. Given those limitations, we had to look for shortcuts to analyze these circuits. We can determine the t = 0 and t → ∞ limits of behavior by replacing C and L with shorts or opens as appropriate. In some cases we could guess the solution for I(t) (the ansatz method), plug it into our Kirchhoff's differential equations, and solve for the unknown coefficients. In others, where we could not guess the solution at the outset, we would have to look for shortcuts using symmetries in the circuit or equations, or take advantage of particular aspects of the circuit that enable us to effectively solve the circuit without having to directly solve the simultaneous differential equations.

c. The next step: time-dependent sources!

But through all of this, we assumed that the sources themselves were constant in time. Now we're going to take out that one last sacred cow, and ask what happens when the sources themselves are time-dependent, providing what is known as alternating current (AC). While it sounds like this would be completely hopeless, it turns out that it's not so bad, and that the DC analysis techniques we have learned so far can be directly applied, with a few small caveats. The one assumption that we will continue to make is that even though the sources are time-dependent, their time dependence is periodic. If this is the case (and certain other limits apply), then if we can analyze the circuit's behavior over one period, we can extrapolate that to future times. Periodicity also makes the time dependence (the waveforms) much more amenable to our analytical tools, since all of the time-dependent behavior can be described in terms of just a few parameters (e.g. amplitude, period, frequency, and a simple equation describing the time dependence in terms of sines, cosines, or polynomials, for example).

II. Periodic voltage sources: Time dependence and waveforms

Let us imagine that we have the familiar ideal voltage source we've dealt with so far, but one with a knob that enables us to dial the voltage up or down on demand. Let's suppose further that this knob can be adjusted automatically, so that the voltage adjusts itself with time, and does so in a periodic way. That is, the knob (and the voltage) follows some pattern over a period of time T, and at time t = T, the voltage has the same value it did at t = 0, and repeats the same pattern over and over again: V(t + nT) = V(t), where n is an arbitrary integer. When this is true, the only thing we need to know is V(t) for 0 < t < T, since all behavior for t < 0 and t > T can be mapped back onto 0 < t < T by the appropriate choice of integer n. Thus by producing a plot of V(t) from t = 0 to t = T, we know everything there is to know about the time dependence of the voltage. With this information in hand, we can start to characterize this time-dependent signal. First, as already implied, the period T (measured in a unit of time, most usually seconds) is a crucial quantity. There is also the inverse of T, known as the frequency (sometimes, cyclic frequency) f, which describes how many times this behavior repeats per second: f = 1/T
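The periodicity condition V(t + nT) = V(t) can be sketched directly in code. This is a minimal illustration (the 5 V ramp waveform and 2 s period are made-up example values, not from the text): any time t is mapped back into the base interval 0 ≤ t < T, so knowing one period determines the signal everywhere.

```python
T = 2.0  # period in seconds (an assumed example value)

def v_base(t):
    """The waveform over one period, 0 <= t < T: a hypothetical 5 V ramp."""
    return 5.0 * t / T

def v(t):
    """Voltage at any time, using the periodicity V(t + nT) = V(t)."""
    return v_base(t % T)

# v(0.5), v(2.5), and v(-1.5) all land at the same point in the cycle
```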

where f is measured in s⁻¹, a unit which is almost always referred to as a Hertz (Hz). Some very old electronics people will refer to f in cycles per second or cycles, where 1 cycle = 1 Hz = 1 s⁻¹. And of course all these units can be prepended by our familiar SI prefixes, so T can be measured in ms or µs or ns, and f can be expressed in kHz, megacycles, or gigacycles per second. Sometimes physicists will use the Greek letter ν (lowercase nu) to represent frequency, though we'll stick with f. For mathematical reasons (which will soon become apparent) it is often useful to express frequencies in radians per second, often referred to as the angular frequency ω (lowercase omega). Since there are 2π radians in a complete circle or cycle, ω = 2πf, so we can easily convert between our three time-domain variables ω, f and T: f = ω/2π and T = 1/f = 2π/ω
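These conversions among ω, f and T can be captured in a few lines (a sketch; the 60 Hz example is simply the familiar North American mains frequency):

```python
import math

def from_frequency(f):
    """Given cyclic frequency f in Hz, return (omega in rad/s, T in s)."""
    omega = 2 * math.pi * f  # angular frequency
    T = 1.0 / f              # period
    return omega, T

omega, T = from_frequency(60.0)  # 60 Hz mains: omega ~ 377 rad/s, T ~ 16.7 ms
```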

If we know any one of those, we can very simply calculate the other two. Now that we've described the periodicity of the signal, we need to look in more detail at what happens within a single period. Fortunately most electronics applications involve voltage sources with a time dependence following one of the three waveforms depicted below.

A square wave is essentially a DC signal whose value changes twice over each period, from +V0 to −V0 and back. Typically the time spent at each voltage is T/2, although the signal can spend more time at one value than the other; this is expressed in terms of a duty cycle, which is 50% if the signal spends half of its time at each value, and different otherwise. A triangular wave is a signal whose value increases linearly with time from zero toward some maximum voltage +V0, then linearly drops back down to zero and then to −V0, and back up to zero for the start of the next period. Typically the rise rate and the fall rate are equal, so the waveform spends T/2 on the rising portion of its cycle and T/2 on the falling portion, but it is possible for these to be unequal, leading to a sawtooth wave. Sawtooth waveforms have a number of uses, most famously as the sweep voltages that direct the electron beam across the screen in a cathode ray tube (CRT), such as an (old-style) TV, computer monitor or oscilloscope screen.

Finally, a sinusoidal wave is a signal whose voltage follows the pattern V(t) = V0 sin(ωt) or V0 cos(ωt) (and this is where ω, the angular frequency, becomes so useful!). The sine and cosine represent the same waveform, but with a 90° phase shift. We used a square wave signal in the previous lab experiment, but from now on, unless otherwise stated, our sources will exhibit a sinusoidal waveform. This turns out to be extremely useful, because sines and cosines essentially return each other as their derivatives, unlike the other two waveforms. Since our fundamental equations for capacitors and inductors involve V, I and their derivatives, a sinusoidal waveform fed into either a capacitor or an inductor will result in a sinusoidal waveform coming out, although there may be a phase shift! It is for this same reason that most generators (which are based on induction!) produce a sinusoidal output waveform, and this is why sinusoidal sources predominate in AC applications (including household power distribution).
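The three waveforms can be written out as functions of time (a sketch with peak value V0 and period T as parameters; the triangular wave uses the symmetric rise and fall described above):

```python
import math

def square(t, V0, T):
    """+V0 for the first half of each period, -V0 for the second (50% duty)."""
    return V0 if (t % T) < T / 2 else -V0

def triangular(t, V0, T):
    """Linear rise 0 -> +V0 -> 0 -> -V0 -> 0 over one period."""
    phase = (t % T) / T
    if phase < 0.25:
        return 4 * V0 * phase
    elif phase < 0.75:
        return 4 * V0 * (0.5 - phase)
    else:
        return 4 * V0 * (phase - 1.0)

def sinusoidal(t, V0, T):
    """V0 sin(omega t) with omega = 2 pi / T."""
    return V0 * math.sin(2 * math.pi * t / T)
```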

III. Quantifying the waveform: the amplitude

Now that we've agreed that our AC source can be described by a period T (in s), frequency f (in Hz) and angular frequency ω (in rad/s or s⁻¹), between which we can readily convert, and have agreed on a particular waveform (sinusoidal), we can try to quantify the amplitude of the output signal using something more specific, and physically meaningful, than the V0 we used in the previous section. The most obvious way to quantify the waveform is to specify the peak value of voltage, Vp. In the case of a square wave, the voltage oscillates between +V0 and −V0. In the case of a triangular wave, the voltage linearly climbs from −V0 to +V0 and back down. In the case of a sinusoidal wave given by V0 cos(ωt), the maximum voltage achieved is V0. So in all cases Vp = V0. We may also quantify the amplitude in terms of the maximum difference between voltages that can be developed, known as the peak-to-peak voltage, or Vpp. In all three cases, V(t) varies from −V0 to +V0, so Vpp = +V0 − (−V0) = 2V0. But Vp and Vpp represent only the maximum voltages reached, and are not necessarily representative of the whole waveform. What we'd like is some measure of a typical or average voltage delivered over the whole period. Well, the cheeky response is that if V(t) > 0 for half the period, and V(t) < 0 for the other half, and the time dependence is symmetric for both sides, then the average voltage is precisely zero! And yes, the cheeky response is right: for a purely AC source (with any waveform), the average voltage over an entire period is zero. But this cheeky response, though correct, doesn't really tell us anything useful: you can certainly get a heck of a shock from a 110V AC source, even though the average voltage is zero. What's more useful is a measure of the average voltage over half the period.
To determine this, we imagine comparing our time-varying source over 0 < t < T/2 with a constant DC source Vavg, one whose waveform encloses exactly the same area as the AC waveform (i.e. the pink areas under the curves are equal in the figure at right). This constant source's waveform has an area under its curve given by A = Vavg·(T/2), where T/2 is the width over which we're integrating. The area under the AC waveform is given by ∫V(t) dt, where the integral is done from 0 to T/2. Equivalently, we can write

Vavg = <V(t)> = ∫V(t) dt / ∫dt

where <V(t)> represents the average value of V(t) over the time 0 < t < T/2, or, as you'll call it in quantum mechanics and other classes, the expectation value of V(t). For a square wave, we can see by inspection (since the rectangles are identical) that Vavg = V0. For a triangular wave, we find that Vavg = V0/2, which is not too surprising. For a sinusoidal wave, we'll do the integral in radians, using ω = 1, so T/2 = π:

A = Vavg·π = ∫₀^π V0 sin(t) dt = V0(−cos(π) − (−cos(0))) = 2V0

Vavg = (2/π)V0 ≈ 0.637V0
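The half-period average can be checked numerically (a sketch using a simple midpoint-rule integration; n is an arbitrary resolution):

```python
import math

def half_period_average(v, half_T, n=100_000):
    """Average of v(t) over 0 < t < half_T via the midpoint rule."""
    dt = half_T / n
    return sum(v((k + 0.5) * dt) for k in range(n)) * dt / half_T

V0 = 1.0
vavg = half_period_average(lambda t: V0 * math.sin(t), math.pi)
# vavg comes out ~0.637 V0, matching (2/pi) V0
```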

This makes intuitive sense: V(t) rises rapidly from 0 to about 1/2 or 2/3 of its maximum value, then sort of hangs around near the maximum value for a while, and then rapidly drops off toward its negative maximum, so it's reasonable that Vavg will be somewhere between 1/2 and 2/3 of V0. However, there's still one more issue: how does the power delivered to a resistive load by an AC source compare to that delivered by a DC source? Clearly the maximum instantaneous power delivered, when V(t) = V0, will be Pmax = V0²/R, but there will also be times when V(t) = 0, so the instantaneous power will be P = 0. Well, what is the average power delivered to the load? And if we can determine the average power, we may define some effective voltage of an AC source: one that delivers the same average power to a load as a DC source of the same voltage VDC. We may make a first guess that this occurs when VDC = Vavg, which is a good guess but not quite right. Unfortunately, we actually have to do the math (again, with ω = 1 so T/2 = π):

Pavg = ( ∫₀^{T/2} (V²(t)/R) dt ) / ( ∫₀^{T/2} dt ) = (1/R) ∫₀^π V0² sin²(t) dt / π = (V0²/R)·(1/2) = ½·Pmax

since ∫₀^π sin²(t) dt = π/2. We then define a new quantity Vrms such that

Pavg = Vrms²/R = ½·Pmax = V0²/(2R)

Vrms = sqrt(Vrms²) = V0/sqrt(2) ≈ 0.707V0

where Vrms is called the root mean square or RMS voltage; the name comes from the fact that Vrms is given by the square root of the mean value of the voltage squared. Vrms is close to, but not exactly equal to, Vavg. And it's worth remembering this derivation, because rms quantities come up very frequently in mathematics and physics.
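The RMS computation mirrors its name: square the signal, take the mean, then take the root (again a midpoint-rule sketch):

```python
import math

def rms(v, half_T, n=100_000):
    """Root of the mean of v(t)^2 over 0 < t < half_T."""
    dt = half_T / n
    mean_square = sum(v((k + 0.5) * dt) ** 2 for k in range(n)) * dt / half_T
    return math.sqrt(mean_square)

V0 = 1.0
vrms = rms(lambda t: V0 * math.sin(t), math.pi)
# vrms comes out ~0.707 V0 = V0/sqrt(2), a bit above Vavg ~ 0.637 V0
```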

Despite its rather obscure origin, in practical work Vrms turns out to be by far the most useful of the four voltage specifications we have (Vp, Vpp, Vavg and Vrms), because an AC source with a given Vrms will deliver the same average power (and thus energy over some length of time) to a resistive load as a DC source with a VDC of equal value. The 110V or 120V that comes out of our wall outlets is actually Vrms; Vp, Vpp and Vavg have other values which you can calculate:

Vp = sqrt(2)·Vrms
Vpp = 2·sqrt(2)·Vrms
Vavg = Vrms·(2/π)·sqrt(2) ≈ 0.900Vrms
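Taking a 120 Vrms outlet as the example, the other three measures follow directly from the relations above:

```python
import math

Vrms = 120.0                                # what the outlet is rated at
Vp = math.sqrt(2) * Vrms                    # ~170 V peak
Vpp = 2 * math.sqrt(2) * Vrms               # ~339 V peak-to-peak
Vavg = Vrms * (2 / math.pi) * math.sqrt(2)  # ~108 V half-period average
```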

For curiosity's sake, one can show that for a square-wave source, Vrms = Vavg = Vp = V0, and for a triangular-wave source, Vrms = V0/sqrt(3). But the most important thing to remember is that there are four measures of AC voltage: Vp, Vpp, Vrms and Vavg, with Vp = V0 and Vpp = 2V0 for all three waveforms, and, for a sinusoidal wave, Vavg ≈ 0.637V0 and Vrms ≈ 0.707V0.

IV. Resistors in AC circuits

Now, let's consider what happens when we hook a resistor up to an AC source. Ohm's Law still applies, as always, but with a caveat: since V(t) is variable, the instantaneous current drawn is equal to the instantaneous voltage of the source divided by the resistance:

V(t) = I(t)·R    I(t) = V(t)/R

That is, at any time t, if the value of the voltage is V(t), then the current through the resistor will be I(t) = V(t)/R. If a moment later, at time t′, the voltage has some other value V(t′), then the current will be given by I(t′) = V(t′)/R. Resistors have no memory, and do not care what the voltage was in the past or will be in the future, only what it is at the present instant. (We will see that capacitors and inductors are quite different!) Now, if V(t) = V0 cos(ωt), then I(t) = (V0/R) cos(ωt) = I0 cos(ωt), where I0 = V0/R, and we can cancel out the time-dependent cosine term to come up with an AC Ohm's Law:

V0 = I0R

although we have to be conscious that the actual voltages (and currents) are varying with time, and that V0 and I0 are just metrics (describing the amplitude of some time-varying waveform) which we defined for convenience. The familiar proportionality between V and I in Ohm's Law still holds, but the actual V and I values are continuously changing. Next, recall that we defined four measures of voltage, all of which can be readily calculated from V0: Vp = V0, Vpp = 2V0, Vavg = (2/π)V0 and Vrms = V0/sqrt(2). So our AC Ohm's Law can be rewritten in terms of any of those quantities with equal validity, and we can define Ip, Ipp, Iavg and Irms as follows:

Vp = IpR    Vpp = IppR    Vavg = IavgR    Vrms = IrmsR

All four forms of this AC Ohm's Law are equally valid: given a voltage defined using any of the four measures, we can find the current (of the same type) by simply applying Ohm's Law. And if we know, say, Iavg, we could compute, say, Ipp using the same relations as for voltage, in this case Ipp = 2·(π/2)·Iavg = π·Iavg. We just have to be consistent: if we use Vrms, Ohm's Law gives us Irms. However (and this is crucial!), it is NOT correct to use P = VI and say that the power is given by Vavg·Iavg, or Vp·Ip, or Vpp·Ipp. In an AC circuit containing resistors, P = VI is only valid if we are very careful about our definitions, since V(t) and I(t) are both actively changing with time:

P(t) = V(t)·I(t)    (instantaneous power)
Pavg = (1/T) ∫₀^T V(t) I(t) dt = Vrms·Irms    (average power)
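This caveat about power can be checked numerically: for a sinusoidal source driving a resistor, only the rms pair reproduces the true average power (the amplitude and load below are made-up illustration values):

```python
import math

V0, R = 10.0, 100.0  # assumed example amplitude and load
I0 = V0 / R

# True average power by direct integration over one period (omega = 1)
n = 100_000
dt = 2 * math.pi / n
p_true = sum(V0 * math.cos((k + 0.5) * dt) * I0 * math.cos((k + 0.5) * dt)
             for k in range(n)) * dt / (2 * math.pi)

p_rms = (V0 / math.sqrt(2)) * (I0 / math.sqrt(2))         # = V0*I0/2, matches
p_avg_pair = ((2 / math.pi) * V0) * ((2 / math.pi) * I0)  # too small -- wrong!
```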

(And when we start talking about capacitors and inductors, even Pavg = Vrms·Irms is not true, since capacitors and inductors don't dissipate power, as resistors do, but merely store it for later re-delivery back into the circuit.)

V. Capacitors in AC Circuits

So far, so good. Resistors in AC circuits work just as we would expect them to, although we have to be a little careful in using consistent definitions (peak, peak-to-peak, average or rms) of V and I, and very careful in talking about power (computing instantaneous power, or average power by actually integrating V(t)·I(t), or P = VI using only the rms definitions of voltage and current). Let's see how we fare with capacitors. Recall that the fundamental equation for capacitors is Q = CV, whose time derivative is I = C dV/dt. Now, if V(t) = V0 cos(ωt), it follows that dV/dt = −ωV0 sin(ωt). Thus

If V(t) = V0 cos(ωt), then I(t) = −ωCV0 sin(ωt)

If we're not all that interested in the time-dependent parts, we can relate the amplitudes of V and I as I0 = ωCV0 = V0/XC, so that

V0 = I0XC    Vp = IpXC    Vpp = IppXC    Vavg = IavgXC    Vrms = IrmsXC

which all look reassuringly like our familiar Ohm's Law, but with R replaced by XC = 1/(ωC), known as the capacitive reactance, and measured (like R) in ohms.
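A quick numerical feel for the capacitive reactance (the 1 µF value is an arbitrary example):

```python
import math

def x_c(f, C):
    """Capacitive reactance in ohms: X_C = 1/(omega C) = 1/(2 pi f C)."""
    return 1.0 / (2 * math.pi * f * C)

C = 1e-6  # a 1 uF capacitor (assumed example)
xc_low = x_c(60.0, C)   # ~2.65 kohm at mains frequency
xc_high = x_c(10e3, C)  # ~15.9 ohm at 10 kHz: "resistance" falls as f rises
```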

So, under AC, instead of all that business with charging and discharging circuits, time constants and exponentials, capacitors act very much like resistors, with a linear relationship between voltage and current! So it seems that our lives have become much simpler! And indeed they have. However, there are two very important caveats to be aware of (aside from the fact that I frequently end my sentences in prepositions):

First, note that XC (which is our stand-in for resistance) = 1/(ωC). That means it is dependent on frequency. So we can't go to the store and buy an XC = 50 Ω capacitor. All we can buy are capacitors rated in Farads; XC is not determined until we decide what frequency to use with our source. Further, as the frequency changes, or when signals of different frequencies are present in the same circuit, the value of XC they see varies. This can make our lives difficult (having to re-calculate XC for different frequencies) but is also exceedingly useful, since this property enables us to distinguish between different frequencies (i.e. send the high-frequency signals one way, and the low-frequency signals another way, using the fact that different-frequency signals see a higher or a lower XC), making it possible to create high-pass and low-pass filters. Further, when we add inductors into the mix (which have a complementary frequency dependence), we can even make band-pass filters, which will pass only signals within a certain frequency range and reject everything else; this is how your TV and radio select the station you want to tune into, and reject all other signals! (However, as I've previously implied, electrical engineers have come up with some pretty ingenious ways to avoid using inductors.)

Second, note that while V(t) ~ cos(ωt), I(t) ~ −sin(ωt). This means that V and I, while their amplitudes are related simply by XC in that Ohm-like law, are 90° out of phase with each other.
In particular, the time dependence of I(t) leads the time dependence of V(t) by 90°. This phase shift (I leads V) comes about because I(t) = C dV/dt, and is intrinsic to the way the capacitor works. Finally, let's calculate the power delivered to a capacitor:

Pavg = (1/T) ∫₀^T V(t) I(t) dt = (1/T) ∫₀^T V0 cos(ωt) · (−V0/XC) sin(ωt) dt = 0

since the integral of sin(ωt)cos(ωt) over one period is zero. This means that, on average, zero power is delivered to a capacitor, no matter what the voltages or currents may be! The physical interpretation is that the capacitor is alternately storing and releasing electrical energy as it charges and discharges with each cycle, which is very different from the continuous energy dissipation of the resistor.

VI. Inductors in AC Circuits
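The zero-average-power result for a capacitor can be verified by brute force (a sketch with ω = 1 and arbitrary example values of V0 and XC):

```python
import math

V0, XC = 5.0, 50.0  # assumed example amplitude and reactance
n = 100_000
dt = 2 * math.pi / n

# Integrate V(t)*I(t) = V0 cos(t) * (-(V0/XC) sin(t)) over one full period
p_avg = sum(V0 * math.cos((k + 0.5) * dt)
            * (-(V0 / XC) * math.sin((k + 0.5) * dt))
            for k in range(n)) * dt / (2 * math.pi)
# p_avg is ~0: the capacitor stores and returns energy but dissipates none
```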

Now that we've determined the behavior of resistors and capacitors under an AC source, let's see what happens when we hook up an inductor to an AC source. The fundamental equation for an inductor is V = L dI/dt, so if V(t) = V0 cos(ωt), then dI/dt = V(t)/L = (V0/L) cos(ωt). It follows that

I(t) = ∫(dI/dt) dt = ∫(V0/L) cos(ωt) dt = (V0/(ωL)) sin(ωt)

As we did before, we can remove the time-dependent portions by using one of our four amplitude measures, and define XL = ωL as the inductive reactance in ohms, to get

V0 = I0XL    Vp = IpXL    Vpp = IppXL    Vavg = IavgXL    Vrms = IrmsXL

Therefore, inductors also follow the same Ohm-like law under AC as capacitors do, so their behavior can be modeled much as we modeled resistors in DC circuits. However, the same two caveats as for capacitors apply:

First, XL = ωL, so XL varies with frequency. Therefore we can't go out and buy an inductor with a specific XL value, only a specific L value (in Henries), and must calculate XL based on the frequency being used. XL will vary with frequency, which is a pain since you have to recalculate the value of XL as the frequency changes, but is also extremely useful in that it allows frequency-distinguishing filters to be devised.

Second, note that while V(t) ~ cos(ωt), I(t) ~ +sin(ωt). This means that V and I, while their amplitudes are related simply by XL, are 90° out of phase with each other. In particular, the time dependence of I(t) lags the time dependence of V(t) by 90°. This phase shift (V leads I) comes about because I(t) = (1/L)∫V(t) dt, and is intrinsic to the way the inductor works. Finally, let's consider the power delivered by an AC source to an inductor:

Pavg = (1/T) ∫₀^T V(t) I(t) dt = (1/T) ∫₀^T V0 cos(ωt) · (V0/XL) sin(ωt) dt = 0

Again, this means that, like the capacitor, the inductor dissipates no energy, only alternately storing and releasing energy over each period. That is, zero average power is delivered to capacitors and inductors, and the instantaneous power oscillates between positive and negative over each period, as the source alternately charges the capacitor and creates an E-field between its plates, or drives the current that generates a B-field inside the inductor. So, unlike a resistor, which continuously dissipates power by converting electrical energy to heat, capacitors and inductors only periodically store and release electrical energy. We will revisit this concept later when we talk about the power factor and reactive power, since resistors and capacitors/inductors work very differently when it comes to power!
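The complementary frequency dependence of XL and XC is worth seeing with numbers (the 10 mH and 1 µF values are arbitrary examples):

```python
import math

def x_l(f, L):
    """Inductive reactance in ohms: X_L = omega L = 2 pi f L."""
    return 2 * math.pi * f * L

def x_c(f, C):
    """Capacitive reactance in ohms: X_C = 1/(2 pi f C)."""
    return 1.0 / (2 * math.pi * f * C)

L, C = 10e-3, 1e-6  # assumed example values
# At low frequency the inductor's reactance is tiny and the capacitor's huge;
# at high frequency the roles reverse -- the basis of frequency-selective filters.
xl_low, xc_low = x_l(60.0, L), x_c(60.0, C)      # ~3.8 ohm vs ~2.7 kohm
xl_high, xc_high = x_l(100e3, L), x_c(100e3, C)  # ~6.3 kohm vs ~1.6 ohm
```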

VII. Phase shift mnemonic

There is a simple mnemonic to remember whether the current leads the voltage, or the voltage leads the current. It is: ELI the ICE man

That is, for an inductor (L), ELI: the voltage (E) leads (is ahead of) the current (I). For a capacitor (C), ICE: the current (I) leads the voltage (E).

VIII. Combinations of inductors, resistors and capacitors

Now that we know how to deal with circuits in which an AC source is connected to a single component (R, L or C), let's see what happens when we have multiple devices of the same type.

When resistors are connected in series in a DC circuit, we know that they can be replaced by a single Req = R1 + R2 + …, and this continues to hold true in AC circuits. When they are connected in parallel, we use the "one over" formula: 1/Req = 1/R1 + 1/R2 + …

When inductors are combined, we recall that the formulas were the same as for resistors: in series Leq = L1 + L2 + …, and in parallel 1/Leq = 1/L1 + 1/L2 + … Since XL = ωL, XL should transform the same way. In series:

XLeq = ωLeq = ω(L1 + L2 + …) = XL1 + XL2 + …

In parallel:

1/XLeq = 1/(ωLeq) = (1/ω)(1/L1 + 1/L2 + …) = 1/XL1 + 1/XL2 + …

Capacitances were combined in the opposite way: in series, 1/Ceq = 1/C1 + 1/C2 + …, and in parallel Ceq = C1 + C2 + … Now, since XC = 1/(ωC), in series we find that

XCeq = (1/ω)(1/Ceq) = (1/ω)(1/C1 + 1/C2 + …) = 1/(ωC1) + 1/(ωC2) + … = XC1 + XC2 + …

In other words, series capacitive reactances (like series inductive reactances and series resistances) add. In parallel:

1/XCeq = ωCeq = ω(C1 + C2 + …) = ωC1 + ωC2 + … = 1/XC1 + 1/XC2 + …

So, in parallel, capacitive reactances (like inductive reactances and resistances) combine using the "one over" formula.

An alternate way of describing combinations of reactances in parallel is analogous to the use of conductance to describe the combination of resistors in parallel. Instead of using the "one over" formula to combine resistors, we rewrite each as a conductance G = 1/R, from which we find that Geq = G1 + G2 + … In analogy, we can define the capacitive susceptance BC = 1/XC and the inductive susceptance BL = 1/XL, and in parallel these susceptances combine as:

BCeq = BC1 + BC2 + …

BLeq = BL1 + BL2 + …

where susceptances, like conductances, are measured in mhos or Siemens (1 S = 1 Ω⁻¹), with BC = ωC and BL = 1/(ωL). While the use of susceptance and conductance has been more of a mathematical novelty thus far, it will prove vital when working with parallel RLC circuits!
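These combination rules can be wrapped in two small helpers that work identically for resistances and for like-kind reactances (a sketch; the 100 Ω values are arbitrary examples):

```python
def series(*xs):
    """Series resistances or like-kind reactances simply add."""
    return sum(xs)

def parallel(*xs):
    """Parallel combination via the 'one over' formula."""
    return 1.0 / sum(1.0 / x for x in xs)

# Two 100-ohm capacitive reactances:
x_series = series(100.0, 100.0)      # 200 ohm
x_parallel = parallel(100.0, 100.0)  # 50 ohm
# Equivalently via susceptances B = 1/X:
# B_eq = 0.01 + 0.01 = 0.02 S, so X_eq = 1/B_eq = 50 ohm
```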
