
Autonomous generation of BER vs. Eb/No curves

Allan Freeman Code 5554, Naval Research Lab Satellite and Wireless Technology Division 4/29/2011

The primary goal of this research paper is to showcase Matlab code that autonomously measures the BER vs. Eb/No curves of a communication system. The secondary goal is to highlight the need to include certain statistical parameters when determining a Bit Error Rate vs. Eb/No performance curve in a laboratory setting.

Background information

Signal to Noise Ratio (SNR) signifies the fidelity of a signal in the presence of noise. To determine SNR we first measure the power of the signal of interest, then turn the signal off and measure the power of the noise over an identical bandwidth. The ratio of the two measurements is the SNR. SNR is relevant primarily when using an analog modulation scheme (AM, FM, etc.). If the modulation is digital, a derivative of SNR is used to relate the energy per bit to the noise: Eb/No, defined as the ratio of the energy per bit to the noise power per Hz. It is a unitless quantity most often expressed in dB. Eb/No can be calculated directly from the Signal to Noise Ratio (S/N) using equations 1, 2 and 3.
Eb = S / Rb (1)

No = N / W (2)

Eb/No = (S/N) * (W/Rb) (3)
Eb = energy per bit (Joule)
No = noise per Hz (Watt * s)
S = signal power (Watt)
N = noise power (Watt)
Rb = information bit rate (bits per second, i.e. Hz)
W = bandwidth of the signal (Hz)
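As a quick numerical sketch of equations 1 through 3 (written in Python for illustration; the paper's own test code is Matlab), the conversion from a measured S/N to Eb/No:

```python
import math

def ebno_db(snr_db, bandwidth_hz, bit_rate_hz):
    """Equations 1-3: Eb = S/Rb and No = N/W, so
    Eb/No = (S/N) * (W/Rb), or in dB an additive correction."""
    return snr_db + 10 * math.log10(bandwidth_hz / bit_rate_hz)

# A 512 kbps signal measured over a 512 kHz noise bandwidth:
# Eb/No equals the measured S/N.
print(ebno_db(4.8, 512e3, 512e3))  # -> 4.8
```

When the noise bandwidth equals the bit rate the correction term is 0 dB, which is why the two quantities are sometimes conflated in practice.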

Take note that Eb refers to energy per bit of user available data and does not include the code bits inserted by FEC into the data stream (FEC and code bits will be discussed below). Eb/No is measured at the input to a demodulator and is a basic measure of the quality of the signal in the presence of noise. Eb/No is an important specification when calculating link budgets and comparing the performance of various waveforms and modulation schemes. Bit Error Rate (BER) is the ratio of the number of bit errors to the total number of bits passed during a period of time. A bit error is a received bit of a data stream over a communication channel that has been altered due to errors induced by noise, interference, poor modulator design, etc. BER is a unitless measure and is commonly referred to as the probability of bit error (pe). BER is directly related to Eb/No and can be calculated using error performance curves (BER vs. Eb/No curves).

Figure 1: Eb/No vs BER curve (SDM300, 258)

The performance curves are utilized when determining the minimum Eb/No for the desired BER performance. Figure 1 is a performance curve for the SDM300 family of satellite modems. For example, if the requirement from the customer was QPSK modulation, a maximum BER of 1*10^-8 and a Reed Solomon code, then the required Eb/No would be ~4.8 dB. This required Eb/No is then used to determine the specifications needed by the remaining pieces of the communications system (RF electronics, antennas, etc.). This paper discusses a method of determining the performance curves with a confidence level attached. The protocol of the digital data to be passed over the communications system dictates the maximum BER. This maximum BER refers to the highest amount of error a user can experience without a noticeable disruption of service, often referred to as Quasi Error Free (QEF) performance. Another term for QEF is the BER threshold. Refer to Table 1 for the maximum BER needed for certain data sources.
Table 1: Maximum BER

  Data Protocol (reference)                       Maximum BER
  Voice, cell phones (Green, 316)                 1*10^-3
  DVB-S2 MPEG2 video, UDP packets (ETSI, 34)      1*10^-7

Voice communication can operate at such a high BER because the human brain adds an extra layer of error correction: the listener attempts to determine what the person on the other end of the line said without any direct knowledge of what was sent. This is similar to Forward Error Correction (FEC) in digital communications systems. FEC allows a modem to correct a portion of the errors present in the demodulated data without having any direct knowledge of the actual data being sent. FEC works by adding code bits onto the modulated data, then using the code bits at the demodulator to correct a portion of the errors detected. A higher order FEC code (more code bits) will allow for lower Eb/No requirements but will increase the bandwidth of the signal; bandwidth is directly related to the symbol rate, which is in turn directly related to the total data rate (code bits + data bits).

The Bit Error Rate of a system can be tested using a pattern generator and an error detector. The pattern generator transmits a defined digital pattern to the Device Under Test (DUT). The error detector is connected to the digital output of the demodulator and synchronizes the received signal with an internally generated pattern; the two are compared to generate the BER. A simple BER test setup is shown in Figure 2. The channel refers to the communications channel and consists of the media present between the output of the modulator and the input to the demodulator. In a laboratory setting the channel may consist of coaxial attenuators and a noise generator; in the field, the channel consists of anything present between the transmit and receive antennas. In the setup shown in Figure 2, the DUT is a modem which modulates the data from the BER tester and then transmits the modulated data to the channel.
After the modulated data passes through the channel, it is demodulated using the DUT and the received bit pattern is sent to the BER tester for comparison with the known pattern.
Figure 2: BER test setup. The pattern generator sends the bit test pattern (digital data) to the DUT, which modulates it; the modulated data passes through the channel to the receiving DUT, and the received bit pattern (digital data) is fed to the error detector.

In digital communications, when working with data that is transported in a UDP, TCP/IP or serial packet format (any packet that contains a checksum), the Packet Error Rate (PER) has greater relevance than BER. The system receiving the packetized data will verify the checksum of each packet; if it is incorrect the packet will be discarded. The next step is to notify the sender of the incorrect transmission of the packet if Automatic Repeat reQuest (ARQ) is enabled. ARQ is an error control method for data transmission that uses acknowledgements to confirm correct receipt of a packet, and is often used in TCP/IP. UDP and serial communication do not natively have ARQ enabled; this leads to the corrupted packet being discarded with no attempt made to notify the sender that the packet was corrupted. The packet error rate can be found using:

pp = 1 - (1 - pe)^N (4)
pp = packet error probability (PER)
pe = bit error probability (BER)
N = packet length (bits)

For a small pe we can use the approximation (assuming all errors are independent):

pp ≈ N * pe (5)
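Equations 4 and 5 can be checked numerically; the packet length used below (a 12000-bit, 1500-byte frame) is just an illustrative choice, not a value from the paper:

```python
def per_exact(pe, n_bits):
    # Equation 4: probability that at least one of N independent bits errs
    return 1 - (1 - pe) ** n_bits

def per_approx(pe, n_bits):
    # Equation 5: small-pe approximation, PER ~= N * pe
    return n_bits * pe

# 12000-bit packet at a BER of 1e-7: the approximation is
# accurate to better than one part in a thousand.
print(per_exact(1e-7, 12000))
print(per_approx(1e-7, 12000))
```

The approximation always slightly overestimates the exact PER, so designing to equation 5 is conservative.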

Logically we could decrease the packet size to reduce the packet error rate. This is true in theory, but in practice a reduction of the packet size increases the percentage of the bandwidth occupied by headers and checksums. A header is data added onto a packet to detail the origin and destination of the data contained within the packet; each packet has a header and checksum attached. The size of the header depends on the communication protocol; UDP over IPv4, for example, carries 28 bytes of header (8 bytes for UDP plus 20 bytes for IPv4). If the packet size were set at 28 bytes, half of the transmitted data would be header rather than user data. This is another area where the designer has to be careful when determining a packet size: if the packet size is too small then a large percentage of the data will be header data, which is meaningless to the user; if the packet size is too big then the packet error rate becomes too high to maintain Quasi Error Free performance.

Additive White Gaussian Noise (AWGN)

The test setup described below adds a noise impairment to the communications channel. This is accomplished by combining the output of an Additive White Gaussian Noise (AWGN) generator with the modulated signal. White noise is defined as noise whose power spectral density is constant over a finite frequency range; it is also known as Johnson noise (Wisniewski). In the test setup the noise must be combined with the modulated signal after the signal has passed through the attenuator. Injecting the noise at this location allows the user to vary the Eb value while maintaining a constant No value. AWGN is a linear addition of wideband white noise with a constant spectral density and a Gaussian distribution of amplitude, and it is uncorrelated with any modulated signal present; this fact helps to simplify the equations used (for example the matched filter). One major detraction of using only AWGN as the impairment is that it will not test the system against impairments such as fading, multipath and line of sight (LOS) blockage.
The test setup in Figure 3 showcases how to use an AWGN generator to add noise impairment to a communications channel while testing in a laboratory environment.

Figure 3: Addition of the noise impairment. The modulated data f(t) from the transmit DUT passes through the digital variable attenuator (0-32 dB, in 0.5 dB steps) into the RF combiner (summer), where the AWGN generator output n(t) is added. The combiner output Y(t) = f(t) + n(t) is sent to the receive DUT; the computer controls the variable attenuator.

To illustrate the simplification of calculations due to the use of uncorrelated noise, consider the matched filter. In communications a matched filter is obtained by correlating a known signal with an unknown signal. The known signal is the signal transmitted by the communications system, and the unknown signal is the signal received by the communications system.
X(t) = known signal
Y(t) = unknown signal = f(t) + n(t)
f(t) = received signal of interest
n(t) = uncorrelated AWGN
Rxy = correlation of X(t) and Y(t)

Rxy = E[ X(t) * Y(t) ] (6)

Rxy = E[ X(t) * (f(t) + n(t)) ] = Rxf + Rxn (7)

Due to the noise being uncorrelated:


Rxn = 0 (8)

Thus
Rxy = Rxf (9)

Equation 9 only holds if the noise impairment is uncorrelated.
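The effect of equations 8 and 9 can be demonstrated with a short simulation (a Python sketch with made-up signal values, not part of the test setup): the zero-lag correlation of the known pattern with the noisy received signal converges to the correlation with the signal of interest alone, because the AWGN term averages toward zero.

```python
import random

random.seed(1)

N = 100_000
# X(t): known +/-1 pattern; f(t): received copy; n(t): AWGN
x = [1.0 if random.random() < 0.5 else -1.0 for _ in range(N)]
f = x                                   # noiseless copy of the signal
n = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [fi + ni for fi, ni in zip(f, n)]   # Y(t) = f(t) + n(t)

def corr(a, b):
    # zero-lag sample correlation: (1/N) * sum a[k]*b[k]
    return sum(ai * bi for ai, bi in zip(a, b)) / len(a)

print(corr(x, n))  # near 0: the noise is uncorrelated with X(t), equation 8
print(corr(x, y))  # near corr(x, f) = 1, equation 9
```

With more samples the residual noise term shrinks as 1/sqrt(N), which is exactly why the matched filter improves with integration time.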

The Law of Large Numbers and application to BER

The law of large numbers is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and tends toward the expected value as the number of trials approaches infinity. Equation 10 (Papoulis, 211) gives the mathematical statement of the law of large numbers. (For the purposes of this paper each variable is also labeled with its BER counterpart.)
P( |k/n - p| ≤ ε ) → 1 as n → ∞ (10)

n = total number of independent events (bits passed)
k = total number of certain events (bit errors)
p = the true probability of bit error (expected value)
ε = arbitrary value with the constraint ε > 0

The law of large numbers assumes independent, identically distributed (i.i.d.) events. Direct application of the law of large numbers to real world events is impossible due to the requirement that the number of events approach infinity. Take note that if anyone tells you they are 100% confident in something, they are naive or ignorant, because it is impossible to measure or record an infinite number of events. If someone were instead to say "I have confidence approaching 100%", that statement would abide by the law of large numbers. To apply confidence percentages and the law of large numbers to real world scenarios we can utilize the Tchebycheff inequality shown in equation 11 (Papoulis, 114):

P( |x - η| ≥ ε ) ≤ σ²/ε² (11)

x = the value of the random variable (average BER measured)
η = expected value of the random variable x (BER expected)
ε = arbitrary value with the constraint ε > 0 (confidence interval)
σ² = variance of the average of the random variable x (variance of the BER)

The proof of the Tchebycheff inequality can be found in Papoulis, 3rd edition, page 114. For example, if σ² = 0 is assumed then:

P( |x - η| ≥ ε ) ≤ 0 (12)

This is expected because a variance of σ² = 0 means the values of the random variable are always equal to the expected value (η); the definition of variance is the concentration of a random variable around its mean. In that case the value inside the absolute value is 0, which is never greater than ε, so equation 11 holds. One benefit of the Tchebycheff inequality is that it holds for any probability density function f(x), and therefore can be used when the probability density function is not known.
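The distribution-free nature of the bound can be checked empirically; the sketch below (illustrative Python, with a Gaussian chosen arbitrarily as the example density) draws samples and confirms the observed tail probability stays under σ²/ε²:

```python
import random

random.seed(0)

def chebyshev_check(sigma, eps, trials=200_000):
    """Empirically compare P(|x - eta| >= eps) with the Tchebycheff
    bound sigma^2 / eps^2 (equation 11) for a zero-mean Gaussian."""
    hits = sum(abs(random.gauss(0.0, sigma)) >= eps for _ in range(trials))
    return hits / trials, (sigma / eps) ** 2

observed, bound = chebyshev_check(sigma=1.0, eps=2.0)
# For a Gaussian the true tail is ~0.0455, comfortably under the 0.25
# bound; the bound is loose but holds for any probability density.
print(observed, bound)
```

The looseness of the bound is the price paid for not having to know f(x); when the density is known, tighter trial counts are possible.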

The Tchebycheff inequality can be manipulated to allow for application in determining the probability of bit error. The complement of the Tchebycheff inequality is shown in equation 13 (Papoulis, 212):

P( |x - η| < ε ) ≥ 1 - σ²/ε² (13)

We can now define the confidence percentage as:

confidence percentage = P( |x - η| < ε ) ≥ 1 - σ²/ε² (14)

Confidence percentage is the percentage of the events for which the difference between the random variable and the expected value is less than the confidence interval. Transmission of digital data can be treated as a Bernoulli trial, defined as an experiment with exactly two possible outcomes, success or failure. The data bit is either passed correctly (success) or altered in transmission (failure, i.e. a bit error). For a discrete random variable (Bernoulli trial), Papoulis defines the variance of the random variable x as (Papoulis, 212):
σ² = p * q (15)

p = probability of success
q = probability of failure

The variance of the average of the random variable x is:

σ̄² = σ²/N = p * q / N (16)

N=Number of events

This results in the following equation:

confidence percentage = 1 - (p * q) / (N * ε²) (17)

Solving for the number of events yields:

N = (p * q) / ((1 - confidence percentage) * ε²) (18)

If N satisfies equation 18, then the measured probability of error will be within the confidence interval the confidence-percentage of the time. Each bit is considered one independent event, so to solve for the time needed to obtain a certain confidence percentage, divide the number of events by the user available data rate:

total test time = N / (user available data rate) (19)

The equation above assumes a constant data rate throughout the test. Take note of the omission of the probability from equation 19. Probability can be included by scaling the confidence interval based on the expected value of the BER:

ε = CIP * p (20)

CIP = confidence interval percentage

To show the importance of scaling the confidence interval, consult Figures 4 and 5. The confidence interval remains constant in Figure 4. In Figure 5 the confidence interval is determined using equation 20 and a CIP of 20%. Take note that due to the range of BER the y-axis is a logarithmic scale; the x-axis is already in dB, so it is linearly scaled. To simplify the graphs, only the portion of the confidence interval greater than the BER value is shown. The BER values were determined using equation 21, which applies only to a QPSK modulation whose only impairment is AWGN:

pe = (1/2) * erfc( sqrt(Eb/No) ) (21)
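Equation 21 is straightforward to evaluate numerically (a Python sketch; erfc is the complementary error function):

```python
import math

def qpsk_ber(ebno_db):
    """Equation 21, QPSK over an AWGN-only channel:
    pe = (1/2) * erfc(sqrt(Eb/No)), with Eb/No in linear units."""
    ebno = 10 ** (ebno_db / 10)  # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebno))

for ebno_db in (0, 2, 4):
    print(ebno_db, qpsk_ber(ebno_db))
```

This is the uncoded curve; the turbo-coded SDM300 curves of Table 2 sit well below it at the same Eb/No.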


Figure 4: Performance Curves with confidence interval held constant at 5e-6


Figure 5: Performance Curves with confidence interval at 20% of BER.

When the confidence interval is held at a constant value as the BER decreases, the confidence interval begins to overlap with the BERs of adjacent Eb/No values. To maintain stricter control over the confidence interval we can apply equation 20, as seen in Figure 5. This decreases the likelihood that the confidence interval will overlap with the adjacent Eb/No BER value. The cost of this tighter control is a longer total test time, given that the total test time and the confidence interval value are inversely proportional. To reduce the total time needed when applying equation 20, we can use the fact that the user will not notice a difference in the data as long as the BER is below the BER required for QEF (defined as the maximum BER). This means that when the BER is below the QEF requirement, the confidence interval can be expanded to encompass all BER values between it and the QEF requirement, as stated mathematically in equation 22. If the BER is greater than the QEF requirement we use equation 20; if the BER is less than the QEF requirement we apply equation 22:

ε = pQEF - p, for p < pQEF (22)
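One plausible reading of equations 20 and 22 as a selection rule is sketched below (Python for illustration; the function name and the 20% CIP default are assumptions taken from the Figure 5 example, not fixed constants):

```python
def confidence_interval(ber, qef_ber, cip=0.20):
    """Above the QEF threshold, scale the confidence interval with the
    expected BER (equation 20); below it, widen the interval to span
    all BER values up to the QEF requirement (equation 22)."""
    if ber >= qef_ber:
        return cip * ber          # equation 20
    return qef_ber - ber          # equation 22

print(confidence_interval(1e-4, 1e-6))  # ~2e-5 (equation 20 branch)
print(confidence_interval(1e-8, 1e-6))  # ~9.9e-7 (equation 22 branch)
```

Below the QEF threshold the interval grows rather than shrinks with decreasing BER, which is what keeps the deep-BER test times bounded.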


When testing communications systems in a laboratory environment, the total time available for testing is the major determinant of the optimal confidence interval and percentage. Equation 19 shows that the confidence interval and total test time are inversely proportional, whereas the confidence percentage and total test time are directly proportional. This paper has illustrated three independent means by which to determine the confidence interval:

A. Applying a constant confidence interval, i.e. the confidence interval does not vary with the BER.
   a. This process is independent of the BER and is directly related to the number of measurements taken (i.e. unique Eb/No values).
B. Applying a percentage confidence interval, i.e. the confidence interval is a certain percentage of the BER.
   a. Unless the percentage is an unreasonably high number this process will take the most time, thus it will be ignored.
C. Applying a constant confidence interval when the BER is above the maximum BER, and applying equation 22 when the BER is less than the maximum BER requirement.
   a. The result of this process is a confidence percentage that the value is within a constant confidence interval, or a confidence percentage that the value is below the maximum BER (QEF requirement).

Table 2 lists the BER values shown in Figure 1 for QPSK with a 1/2 rate turbo code. These values will be used in the total test time calculations. The data rate is assumed to be 512000 bps.
Table 2: SDM300 performance curve, QPSK, 1/2 rate turbo

  Eb/No (dB)   BER
  3            1e-6
  3.5          1e-8
  4            1e-9

Table 3: Total test time when using process A. Confidence interval = 2e-4, 3 unique Eb/No values (3, 3.5, 4).

  Confidence Percentage (%)   Total test time (s)   Total test time (hours)
  99                          3660                  1.01
  95                          732.42                0.2
  90                          366.21                0.101
  75                          146.48                0.0406


The application of process A to Table 2 yields Table 3; the total test times are reasonable.
Table 4: Total test time when using process C. The minimum BER required is 1e-6. Data rate 512000 bps.

  Confidence Percentage (%)   Total test time (s)   Total test time (days)
  99                          1.1e10                127310
  95                          2.214e9               25625
  90                          1.1e9                 12731
  75                          4.42e8                5115

The values for total test time in Table 4 are unreasonable and out of scope when the researcher's goal is to verify the operation of a modem. To bring the total test time down to a feasible time frame we can push a higher data rate across the modems. (The SDM300 is not capable of data rates above 5 Mbps; Table 5 is just an example.) Table 5 details the total test time when using process C with a minimum required BER of 1e-6 and a data rate of 100 Mbps.
Table 5: Total test time when using process C. The minimum BER required is 1e-6. Data rate 100 Mbps.

  Confidence Percentage (%)   Total test time (s)   Total test time (days)
  99                          5.632e7               651
  95                          1.133e7               131.2
  90                          5.6e6                 65
  75                          2.2e6                 26.1

The total test times shown in Table 5 are more realizable than those in Table 4. Because the total test time is inversely proportional to the data rate, it is recommended that process A be used to determine the confidence interval for systems that are not high data rate (less than ~100 Mbps).

Testing

Using equations 18, 19, 20 and 22, Matlab code was written to autonomously generate the BER vs. Eb/No curves given the following user input:

- Confidence percentage
- Confidence interval
- Maximum BER

The Matlab code runs on a PC which is connected to the modem and variable attenuator using a RS232 serial connection. This test setup can be seen in Figure 6.
The transmit IF output of the EF Data SDM300 modem, f(t), passes through the digital variable attenuator (0-32 dB, in 0.5 dB steps) to produce fa(t). The RF combiner (summer) adds the AWGN generator output n, and fa(t)+n is fed to the modem's receive IF input. The computer controls the modem and the variable attenuator over RS232 serial cables; all signal paths are coaxial cable.

f(t) = modulated signal
fa(t) = attenuated modulated signal
n = AWGN

Figure 6: Performance Curve test setup

The EF Data SDM300 modem is a satellite modem capable of data rates up to 5 Mbps. For this test the modem was set up with the following settings:

  Modulation: QPSK
  Data rate: 512000 bps
  FEC: 1/2 rate turbo code
  Scrambling: on
  BER 2047 pattern: on
  IF frequency: 70 MHz

For this test, to simplify the test setup, only one modem was used. This does not affect the performance curve because the TX and RX chains act as separate channels within the modem. The AWGN generator used during this test is a NoiseCom noise generator, model NC6108. It outputs AWGN in the frequency range of 100 Hz to 500 MHz. On the front of the unit is a dial type attenuator; for this test the dial was set at 10 dB. The variable attenuator is a MiniCircuits digital step attenuator whose attenuation can be set from 0 to 32 dB in steps of 0.5 dB; it requires a binary TTL input to control the attenuation. The conversion from serial data to binary is done using an ADR2000 Serial Data Acquisition interface: the user sends the ADR a value from 0 to 32 and the ADR sends the corresponding binary pattern to the control ports of the digital variable attenuator.

Figure 7 Picture of Laboratory BER test setup

The following is a Matlab flow chart detailing the code written for this paper.


The Matlab code proceeds through the following steps (left column of the flow chart), with the data recorded or output to the screen at each step (right column):

1. Check RS232 connectivity. If the connection is successful, "All devices successfully connected" is printed to the screen.

2. Determine the variable attenuator settings which result in 0 dB < Eb/No < 16 dB. Record the attenuation settings, Eb/No values and BER. Output: a matrix containing the following data points (example data inserted):

   Attenuation #   Eb/No (dB)   BER
   3               5            3.00E-11
   4               4.5          9.00E-08
   5               4            3.00E-07
   6               3.5          2.00E-06

3. Determine the performance curve using equations 18 and 19. Record the attenuation settings, Eb/No values, average BER, time of each test and total time of the test. Output: a matrix containing the following data points (example data inserted):

   Attenuation #   Eb/No (dB)   Average BER   Time of Test (s)   Total Time of Test (s)
   3               5            3.00E-11      5.00E+06           5.40E+06
   4               4.5          9.00E-08      4.00E+05           4.02E+05
   5               4            3.00E-07      2000               2015
   6               3.5          2.00E-06      10                 15

4. Graph the performance curve and the time of each test. Output: graph of the performance curve.

Figure 8: Flow chart of the Matlab code

The full Matlab code can be found in Appendix A.

Matlab Code flow explanation

Step 1
Tests the serial connection to ensure that all of the equipment is powered on and connected to the correct serial ports.

Step 2
Generates the attenuation setting, Eb/No value and BER table. The SDM modem is only capable of reporting Eb/No values between 0 and 16 dB, so the testing concentrates on the variable attenuator values which lead to an Eb/No within that range. Along with the Eb/No values a single BER measurement is taken; this value is used when calculating the total time needed to attach a confidence level to a BER value.

Step 3
For each Eb/No value the code:
- Sets the variable attenuation.
- Records the BER value and stores it in a table; repeats until the BER values have been recorded over a predetermined amount of time (determined in Step 2).
- Finds the average value of the BER.
- For the average BER value and measurement period, confirms that the confidence interval and percentage have been satisfied (using equation 18). If so, moves on to the next Eb/No value; otherwise calculates the additional time needed and repeats the above test.

Step 4
Graphs the recorded data and compares it to the expected Eb/No vs. BER values as defined in Figure 1.

Results

After the Matlab code finishes collecting data, it graphs the two figures shown in Figures 9 and 10.


Figure 9 Matlab code generated Eb/No performance curve

Figure 10 Error between the expected and generated BER values

The results shown above indicate that the code ran successfully, and because the error in the measurement is less than 10^-6, the results are confirmed to be within the specified confidence interval.

Bibliography
Comtech EFdata. "SDM300A manual revision 5." 2 July 2003.


ETSI. Digital Video Broadcasting (DVB); Second generation framing structure, channel coding and modulation systems for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications (DVB-S2), V1.2.1. Sophia Antipolis Cedex, France, 2009.
Green, James Harry. The Irwin Handbook of Telecommunications, 5th edition. McGraw-Hill, 2006.
Papoulis, Athanasios. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, 1991.
Wisniewski, Joseph. "The Colors of Noise." 7 October 1996. Accessed 28 April 2011. <http://www.ptpart.co.uk/colorsof-noise/>.

Appendix A

Serial Port Test

%This script is designed to open the serial ports required and test to
%confirm everything is connected correctly
delete(instrfindall)
clc;
fprintf('testing the communications with the equipment\n')
serialPort_SDM300=('COM6');
serialPort_aten=('COM5');
SDM300_serial=serial(serialPort_SDM300, 'BaudRate', 9600, 'StopBits', 2, 'DataBits', 8);
SDM300_serial.Terminator='CR';
fopen(SDM300_serial);
a=query(SDM300_serial,'<1/EBNO_\n');

Aten_serial=serial(serialPort_aten, 'BaudRate', 57600);
Aten_serial.Terminator='CR';
fopen(Aten_serial);
b=query(Aten_serial,'*IDN?');
if ~isempty(b) && ~isempty(a)
    fprintf('Both devices connected properly\n')
end
if ~isempty(b) && isempty(a)
    fprintf('Variable Aten connected properly\n')
    fprintf('SDM300 not properly connected\n')
end
if isempty(b) && ~isempty(a)
    fprintf('SDM300 connected properly\n')
    fprintf('Variable Aten not properly connected\n')
end
if isempty(b) && isempty(a)
    fprintf('SDM300 not connected properly\n')
    fprintf('Variable Aten not properly connected\n')
end
clear a b

BER corrected monitor

function [output] = BER_corrected_monitor( Object )
%Queries the SDM300 for the corrected BER and parses the reply,
%falling back to alternate delimiters when the response format differs
id=query(Object, '<1/CBER_\n', '%s');
[test rem]=strtok(id,'_');
[id_parsed ~]=strtok(rem,'_');
var=str2double(id_parsed);
if isnan(var)
    [test rem]=strtok(id,'<');
    [id_parsed ~]=strtok(rem,'<');
    var1=str2double(id_parsed);
    if isnan(var1)
        [test rem]=strtok(id,'>');
        [place id_parsed]=strtok(rem,'>');
        [place1 ~]=strtok(id_parsed,'>');
        output=str2double(place1);
    else
        output=var1;
    end
else
    output=var;
end
end

EbNo Monitor
function [output] = EbNo_monitor( Object )
%This function is designed to communicate with the SDM300 and report the
%current EbNo values
%The following line reports the EbNo if it is within 0 to +16 dB
id=query(Object, '<1/EBN0_\n', '%s');
[test rem]=strtok(id,'_');
[a ~]=strtok(rem,'_');
[b ~]=strtok(a,'dB');
var=str2double(b);
%If the modem reports no EbNo, or an EbNo greater than 16 dB, we need
%additional parse steps
if isnan(var)
    [test rem]=strtok(id,'_');
    [e ~]=strtok(rem,'_>');
    [g ~]=strtok(e,'dB');
    var1=str2double(g);
    if isnan(var1)
        output=0;
        return
    end
    output=16;
    return
end
output=var;
end

Variable Attenuator Control

function [] = Set_aten( Object, num )
%This function is designed to change attenuation values based upon the
%number given in the argument above
if num>=0 && num<=63
    fprintf(Object, 'CPA00000000');
    e=dec2bin(num,8);
    r=sprintf('%s%s','SPA',e);
    fprintf(Object,'%s\n',r);
else
    fprintf('Number out of range of attenuator\n')
end
end

Code to generate the EbNo vs BER table

%This script is designed to generate the EbNo values possible with the
%SDM300 and variable attenuator combination
%Column 1 will hold the attenuation values
%Column 2 will hold the EbNo values
%Column 3 will hold the BER values
fprintf('Currently determining the EbNo and BER vs attenuation table\n')
EbNo_tab=zeros(63,3);
Set_aten(Aten_serial,1);
pause(30)
x=0;
for i=1:1:63
    Set_aten(Aten_serial,i)
    pause(15)
    EbNo_value=EbNo_monitor(SDM300_serial)
    BER_value=BER_corrected_monitor(SDM300_serial)
    EbNo_tab(i,1)=i;
    EbNo_tab(i,2)=EbNo_value;
    EbNo_tab(i,3)=BER_value;
    if EbNo_tab(i,2)==0
        x=x+1;
        if x>=2
            break
        end
    end
end
zer=find(EbNo_tab(:,2)==0);
if ~isempty(zer)
    EbNo_tab(zer,:)=[];
end
zer1=find(EbNo_tab(:,2)==16);
if ~isempty(zer1)
    EbNo_tab(zer1,:)=[];
end
fprintf('Done measuring EbNo vs attenuation table\n')

Code to generate the avg BER using the Tchebychev inequality

%This script is designed to vary the attenuation values and measure the BER
%as fast as the computer can record the data; this data is logged with
%respect to time and then compared to the theoretical number of samples
%needed to have a predefined confidence percentage and confidence interval
clc; clear all;
load('ebno_table')
serial_port_test
conf_int=1e-4;
conf_per=75;
data_rate=512000;
minBER=1e-4;
%We already have a table of the EbNo, BER and attenuation values; now to
%parse that table and only use one of the values that has a BER of 1e-12
idx=find(EbNo_tab(:,3)==1e-12);
idx1=max(idx);
idx(idx1)=[];
EbNo_tab(idx,:)=[];
time=BERtrials_constantCI(1,conf_int,conf_per)/data_rate;
%Now to test according to the time specs listed above
clear l;
tot_BER=zeros(3,2);
for l=1:1:3
    Set_aten(Aten_serial,EbNo_tab(l,1))
    pause(15)
    tic
    a=BER_monitor(SDM300_serial)
    x=toc;
    clear first
    first=[a x];
    while x<=time
        a=BER_monitor(SDM300_serial);
        x=toc;
        first=vertcat(first,[a x]);
        time_left=time-x
    end
    tot_BER(l,1)=sum(first(:,1))/length(first);
    tot_BER(l,2)=EbNo_monitor(SDM300_serial);
    tot_BER
end
%Now we have a matrix that compares the BER over a given time vs EbNo;
%now to graph the results
fig1=figure(1);
a=semilogy(tot_BER(:,2), tot_BER(:,1));
hold on
xlabel('Eb/No (dB)')
title('Eb/No vs BER performance curve')
ylabel('BER')
SDM300_data=[4,1e-9;3.5,1e-8;3,1e-6];
b=semilogy(SDM300_data(:,1), SDM300_data(:,2));
set(b,'Color', 'red')
handl=[a,b];
legend(handl, 'BER generated', 'BER expected (expected value)')
hold off
%Now to graph the difference between the two to see the error
fig2=figure(2);
c=interp1q(SDM300_data(:,1),SDM300_data(:,2),tot_BER(:,2));
if isnan(c(3,1))==1
    c(3,1)=1e-6;
end
d=abs(c-SDM300_data(:,2));
f=semilogy(tot_BER(:,2), d);
xlabel('Eb/No (dB)')
title('error between expected and generated')
ylabel('delta between the expected and generated BER')
