Markov chain models came later and changed the way communication channels are described; they proved far more reliable and practical than the older techniques. From this family of models I will be using only two, the Fritchman model and the Gilbert model (1). In wireless communication the most common example of the problem is packet loss, and these two models are widely used there. It has been shown that burst errors can be modeled by a random process, which is the foundation of Markov chain models, and both of the selected models follow this approach. Another area where Markov chain models are widely used is digital video broadcasting, which faces various telecommunication challenges, such as achieving the highest possible data rate in a wireless network and implementing a power-limited mobile receiver. Under these circumstances Markov chain models are needed to model the fading channels; trace analysis is carried out for the modeling of the fading channels, which brings its own difficulties (4) (5).

Gilbert first proposed the idea of modeling these types of errors in a communication channel (1). His model consists of two states, one error free and the other error generating, and it produces (and reproduces) a simple geometric distribution of the errors. The Fritchman model comprises a finite number of states with their transition probabilities. It is an advanced version of the Gilbert model and works on the same principle: each state is either error generating or error free. However, this model has more than one error-free state and only one error-generating state. It is widely used because of its simplicity and is more realistic than the Gilbert model (6).

Interleaving is essential in this project. Without interleaving, dealing with burst errors is extremely difficult, because the presence of a burst error in a code word prevents the data from being decoded correctly at the receiver. The data is therefore interleaved before being transmitted, so that it can be decoded properly at the receiving end: interleaving rearranges the bits in such a way that the effect of a burst error is reduced. Detailed information on this topic is provided later in this thesis. Among the error-correcting codes, the repetition code is the one discussed here. Repetition codes repeat each bit in the communication channel in order to reduce errors and move towards error-free transmission; how these codes work, and their results, is discussed below. In this project I used three error-free states and one error-generating state for the Fritchman model. Both of the chosen models have the advantage of being easy to control and are frequently used because of their simplicity (7).

Three tasks were performed in this project. The first task covers the basics: it introduces the Matlab software and the underlying theory. The second task covers the Gilbert model in full, and the third and final task is based on real-time estimation of the states of the Fritchman model. For the last two tasks the results have to be compared for different situations, such as using the models with and without interleaving and with different probabilities of the states (error generating and error free), and presented as plots (8). A Gantt chart was made at the start of the project in order to plan the work.
The Gantt chart is provided in Fig. 1.0 of the appendix, and all the work was planned according to it. Another chart, showing the weighting of all the tasks and the other objectives of the project, is provided in Fig. 1.1 of the appendix.

Objectives:
To understand the reasons for the generation of burst errors and their effects on a signal.
To learn how to deal with them in Matlab.
To use two of the Markov chain models, the Gilbert model and the Fritchman model, in order to investigate the changes when performing error estimation for a given code.
To learn how to write the algorithms for these models in Matlab.
To find out the effect on the probability of error when the probabilities of the states (error generating and error free) of these two Markov chain models are changed.
To plot graphs comparing the results for different probabilities of the states (error generating and error free).
To understand and use interleaving and de-interleaving.
To compare the results with and without interleaving.

Three tasks are to be completed. The first deals with simple error estimation and introduces Matlab for this project. The second task revolves around the Gilbert model: finding out why and how it should be used with different probabilities of its states (error generating and error free), establishing the importance of interleaving in the Gilbert model, and plotting graphs that clearly show the results with and without interleaving for different state probabilities. The third task is about the Fritchman model: learning how and why it should be used, how to deal with a larger number of states, how to perform real-time estimation in Matlab, and how to change the probabilities of the states and calculate the probability of error, again both with and without interleaving.

Literature Review:
This section covers the theory behind the project. It explains all the techniques and methods involved and defines every communication term that is used.

Burst Errors:
In telecommunications a burst error is a sequence of errors that occur adjacent to each other in the data. If only isolated bits, say the first and the last bit of a transmitted block, are in error, the data can still be recovered correctly at the receiving end, because the errors are not adjacent to each other; with a burst error in the data, however, the output is changed completely and the receiver sees errors. The length of a burst error in a block of data is defined as the number of bits from the first bit in error to the last, inclusive (9). Burst errors come in bursts (bunches): if they are present in a code word they wipe out a series of adjacent bits. An everyday example is a scratch on a CD, where the player may be unable to read the scratched part of the disc because of the burst errors in that section. Error-correcting codes are used to remove these types of errors; in this project repetition codes are used, together with interleaving and de-interleaving and the Markov chain models discussed later (10). Burst errors are highly correlated: if one bit is in error, the neighboring bits tend to be corrupted as well, and in this way whole stretches of the data are corrupted and the exact signal is not received at the receiving end. To make this clearer, suppose a burst error of size 7 is present and starts at bit position 30. Bits 30 and 36 are certainly corrupted, while the bits in between may or may not be; errors at 30 and 36 can drag the neighboring bits into error as well and so form a longer chain of errors. If the transmitted and received data are exactly the same, there was no burst-error chain in the transmitted or received data. Burst errors form only relatively short chains; if, say, one error occurs at bit position 100 and the next at position 200, the bits are not affected over that whole span and the two errors do not form a single burst (11).
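As a small illustration of this definition, the Matlab sketch below builds the burst described above (length 7, starting at bit position 30, first and last bits certainly corrupted, inner bits corrupted at random) and measures its length. The vector size and the all-zero data word are assumptions made purely for illustration.

% Illustration only: a burst error of length 7 starting at bit position 30.
data = zeros(1, 100);                 % all-zero word, so any 1 marks an error
errs = zeros(1, 100);
errs(30) = 1;                         % first bit of the burst is in error
errs(36) = 1;                         % last bit of the burst is in error
errs(31:35) = rand(1, 5) > 0.5;       % inner bits may or may not be corrupted
received = xor(data, errs);
first = find(received ~= data, 1, 'first');
last  = find(received ~= data, 1, 'last');
burst_length = last - first + 1       % = 36 - 30 + 1 = 7, counted inclusively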
Repetition Code:
Wherever the cost and difficulty of encoding and decoding are the main concern, repetition codes are used. They are used to combat fading and noise in a signal during transmission, and there are several places where they appear in practice, such as packet headers, infrared communications, rate matching in cellular systems and transmit time-delay diversity. These codes can also provide time-domain diversity in fast-fading channels, and they are often combined with other error-correcting codes. They are not well suited to Gaussian channels, as they give zero coding gain with soft-decision decoding and negative coding gain with hard-decision decoding (12).

Repetition codes repeat each bit in order to provide protection against errors in the communication channel; several transmitted bits carry the same information. I am using a 3-bit repetition code in this project; the code uses three bits to encode a single bit, as shown below:
a) input = 1, code word (output) = 111;
b) input = 0, code word (output) = 000.
If the first bit is a 0, after repetition encoding it becomes 000, as shown in (b), and similarly a 1 becomes 111. At the receiver the bits are decoded, and I follow the same technique in this project: the majority rule is used for the last two tasks. If all three bits of a triplet are 0 the bit is decoded as 0, and if a triplet contains two zeros and one error bit it is still decoded as 0, so the single error is corrected (13).

The code rate must be considered when dealing with this error-correcting code. It is defined as the ratio of the number of original bits to the number of bits required to encode them:
c) R = k / n,
where k is the number of original bits and n is the number of bits required to encode them; for the 3-bit repetition code this gives R = 1/3. The benefit of the 3-bit repetition code is that it provides protection against a single error: when the code word is transmitted through the channel, the receiver decides whether the transmitted bit was 0 or 1 by looking at the three output bits of the channel. With the help of these codes the probability of error is greatly reduced.
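A minimal Matlab sketch of this 3-bit repetition encoding and majority-rule decoding is given below. It is not the project's own algorithm (which is in the appendix); the message bits and the position of the injected error are arbitrary choices used only to show that a single error per triplet is corrected.

% Minimal sketch: rate-1/3 repetition encoding and majority decoding.
msg = [1 0 1 1 0];                      % information bits

% Encode: repeat every bit three times (1 -> 111, 0 -> 000)
coded = reshape(repmat(msg, 3, 1), 1, []);

% Channel: flip one bit inside a triplet to show single-error correction
rx = coded;
rx(5) = 1 - rx(5);

% Decode: majority vote over each group of three received bits
triplets = reshape(rx, 3, []);          % one triplet per column
decoded = sum(triplets, 1) >= 2;        % at least two ones -> decide 1

isequal(double(decoded), msg)           % returns true: the single error is corrected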
After encoding, the Hamming distance between code words increases, which makes error detection and correction much easier. The Hamming distance between two code words C and C1 is the total number of bit positions in which the two differ. For example, for a (7, 4) encoder with rate 4/7, the Hamming distance between two 7-bit code words is simply the number of positions in which they disagree; for instance, the code words 1011000 and 1000011 differ in four positions, so their Hamming distance is 4. The greater the Hamming distance between the code words, the easier it is to correct the code. The job of the decoder is to select the legal code word that most closely resembles the received bit sequence. This is demonstrated later in the thesis with results (14).

Interleaving and De-Interleaving:
Interleaving in communication systems provides protection against errors such as burst errors, which are long chains of errors. If a burst error is present in a code word, too many errors may be generated in that one code word and it can no longer be decoded correctly. In that case interleaving is performed: the bits are rearranged before being transmitted, so that if a burst error occurs the rearranged bits break its chain and the errors end up separated from each other, spread across the code word. Once the errors are no longer in a chain they can easily be corrected with the help of an error-correcting code, since it is now much easier for the code to find and correct each error. With this simple and cheap method the problem of burst errors is handled. Interleaving is suitable for any type of error-correcting code, as its function remains the same. Figure 1.2 shows how the bits of a code word are rearranged by an ordinary interleaver.

Fig. 1.2: Interleaving of a code word (15).

For example, take a code word and put it into blocks of four: 11110011110110000. If this code word is transmitted through the channel and exactly the same code is received, we can be sure that no errors of any type are present; if the received data decodes into a wrong code word, that shows a burst error is present, because some of the bits have simply been wiped out. In that case interleaving is needed. Below is how a burst error wipes data out of the code word:
1111_____110110000
If we perform interleaving, the code word becomes:
11010110010111010
As can be seen, the bits are completely rearranged. If we transmit this data and a burst error still occurs, it is received as shown below:
1101 ____010111010
whereas after de-interleaving the code word looks like:
11_10_11__0110_00
An error-correcting code such as the repetition code is now needed to recover the code word properly; at the receiving end the original bit order is restored by the reverse operation, known as de-interleaving (16).

There are different types of interleavers; I have used row-column interleaving in my project, as it is the easiest to use. Before selecting this interleaver I researched and wrote algorithms for other interleavers, such as helical, odd-even and RC symmetrical interleavers; these algorithms are provided in the appendix as codes 1 to 7 (page 65). I have also written a Matlab algorithm for row-column interleaving, which is likewise provided in the appendix. These algorithms work for a picture of any type and size, performing interleaving and de-interleaving on its bits, and all of them work successfully. By writing these codes I gained a good understanding of interleavers and how they work, and because all the algorithms work correctly I became more confident about writing algorithms in Matlab for this project. This practice made me much more familiar with the software.
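The following Matlab sketch illustrates row-column (block) interleaving of the kind used later in Task 2, assuming an 81-bit code word arranged in a 9-by-9 block; it is an illustrative reconstruction rather than the appendix algorithm. Writing the bits into the block and reading them out after a transpose spreads a burst of consecutive channel errors across the code word, so that after de-interleaving at most one error lands in each 3-bit repetition triplet.

% Minimal sketch: 9x9 row-column (block) interleaving of 81 coded bits.
coded = randi([0 1], 1, 81);        % any 81-bit code word

% Interleave: fill a 9x9 matrix, transpose it, and read it back out as a row
block       = reshape(coded, 9, 9);
interleaved = reshape(block.', 1, 81);

% A burst of five consecutive channel errors hits the interleaved stream
rx = interleaved;
rx(20:24) = 1 - rx(20:24);

% De-interleave: apply the same reshape/transpose to undo the permutation
deintlv = reshape(reshape(rx, 9, 9).', 1, 81);

% After de-interleaving the five errors are spread nine positions apart,
% at most one per triplet, so the 3-bit repetition decoder can correct them
error_positions = find(deintlv ~= coded)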
Markov Chains:
To deal with wireless networks one should be familiar with packet loss in a packet-switched system and should be able to identify and solve this kind of problem. With the increasing complexity of wireless networks it became necessary to have a model for the error distribution, and different Markov chain models have been developed for this purpose. In a Markov chain model the error, or loss of a packet, is considered to depend on the previous transmission of the bit or packet. One of the Markov chain models, the Gilbert model, is known as the simplest model used in a communication system; it was proposed by Gilbert and later extended by Elliott (22). It contains only two states, an error-generating state and an error-free state. Its advanced version, the Fritchman model, is more efficient in dealing with errors and is also widely used; it consists of K error-free states and N - K error-generating states. In my project I cover these two models in detail, first by explaining how they actually work and then by demonstrating their effectiveness with detailed results (17).

Gilbert Model:
The Gilbert model is a very simple model; it is the least complicated one and is widely used in communication systems. A communication channel that suffers from burst errors belongs to the class of channels with memory, since several error mechanisms, such as switching transients and multipath fading, are bursty in nature. The model is used in various fields of communications, for example ATM communications and telephone circuits (18). Bit-error models are required to generate the noise bits; they fall into two classes, memoryless models and models with memory. In a memoryless model the noise bits are generated by a series of independent trials, where each trial has a probability P(0) of producing an error-free bit and a probability P(1) = 1 - P(0) of producing an error bit. The Gilbert model, presented by Gilbert in 1960, is very simple, is widely used in communication systems, and deals successfully with burst errors. It consists of two states, an error-generating state and an error-free state. The state diagram of the Gilbert model, showing the two states and their respective transition probabilities, is given below:

Fig. 1.3 (19)

When an error-free (zero) bit is produced the chain is in the good state, and when an error (one) bit is produced it is in the bad, error-generating state. The probability of a bit staying in the good state is Pgg and the probability of moving from the good state to the bad state is Pgb = 1 - Pgg; similarly, Pbb is the probability of staying in the bad state and Pbg = 1 - Pbb is the probability of moving from the bad state back to the good state, so the transition probabilities out of each state add up to one. All of these probabilities are shown clearly in Fig. 1.3.
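The transition probabilities of the two states can be collected into a 2-by-2 matrix, each row of which sums to one, and the long-run fraction of time the channel spends in the bad state follows from its stationary distribution. The sketch below illustrates this; the numerical values of Pgg and Pbb are assumptions chosen only for the example, not values used in the project.

% Minimal sketch: Gilbert-model transition matrix and its steady state.
% The probability values here are illustrative assumptions, not project settings.
Pgg = 0.95;  Pgb = 1 - Pgg;          % good -> good and good -> bad
Pbb = 0.60;  Pbg = 1 - Pbb;          % bad  -> bad  and bad  -> good
P = [Pgg Pgb;                        % row 1: transitions out of the good state
     Pbg Pbb];                       % row 2: transitions out of the bad state

% The stationary distribution solves pi = pi*P (left eigenvector for eigenvalue 1)
[V, D] = eig(P.');
[~, k] = max(real(diag(D)));         % pick the eigenvalue closest to 1
p_ss = real(V(:, k)).';
p_ss = p_ss / sum(p_ss);             % normalize so the two probabilities sum to one
fprintf('long-run fraction of time in the bad state: %.3f\n', p_ss(2));
% For a two-state chain this equals Pgb/(Pgb + Pbg) = 0.05/0.45, about 0.111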
Because both 0s and 1s can be produced while the chain is in the bad (error-generating) state, the order of the states cannot be reconstructed from the order of the bits in the error process. After a code word is sent, a burst can arise in either of two ways: a bit can remain in the good state, or it can move to the bad state and produce a one. From the bad state it can return to the good state only once the bit is corrected, and a bit can also stay in the bad state for a very long time, which corresponds to a burst error. Error-free gaps can occur in either of the two states. This two-state Markov model does not repeat the same burst length for a particular burst-error pattern, and a single exponential can be used to describe the error distribution of the single-error-state model. Fig. 1.4 shows this distribution plotted on a logarithmic scale, so that it becomes a linear function.

Fig. 1.4

From Fig. 1.4 it can be seen that a single exponential does not provide enough information to describe the real error behavior of a selected channel. Gilbert models have the advantage that they can be tuned to the environment: if severe fading has to be represented, the bit simply stays in the bad state for a long time. All of the probabilities are adjustable, which is why these models are suitable for almost any kind of environment (20).
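A minimal sketch of how an error sequence can be drawn from this two-state model is shown below. The probability values are assumed for illustration, and the bad state is taken here to corrupt every bit it produces, which matches the way errors are inverted in Tasks 2 and 3; in Gilbert's original formulation the bad state corrupts bits only with a certain probability.

% Minimal sketch: generating an error sequence from the Gilbert model.
% State and probability names follow Fig. 1.3; the numeric values are assumed.
Pgg = 0.95;  Pbb = 0.60;         % probability of staying in the good / bad state
N   = 1000;                      % length of the error sequence

errors = zeros(1, N);            % 0 = error-free bit, 1 = bit in error
state  = 'g';                    % start in the good (error-free) state
for n = 1:N
    if state == 'g'
        errors(n) = 0;                       % good state produces no error
        if rand > Pgg, state = 'b'; end      % with prob. 1-Pgg move to the bad state
    else
        errors(n) = 1;                       % bad state produces an error bit
        if rand > Pbb, state = 'g'; end      % with prob. 1-Pbb return to the good state
    end
end

mean(errors)                     % long-run bit error rate of the channel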
Fritchman's Model:
In 1967 Fritchman presented his model with N states. The model was first used for telephone circuits and for HF tropospheric/ionospheric wireless links between stations (13). It was developed to represent the error distribution of a communication channel, especially fading channels such as mobile radio channels, and it is well suited to describing the error distribution of these channels. As calculating the probability of error in a communication channel becomes more difficult, this model is used to overcome the problem with accuracy and simplicity. The model follows a different approach: the error bit values are modeled inside the states, so that the error bits are determined directly from the states. The N-state Fritchman model is made up of two groups, A and B. Group A consists of K error-free states and group B consists of N - K error-generating states (11). Fig. 1.5 shows the states of a Fritchman model (21).

At every instant the model records whether an error is produced: if the current bit of the sequence is in error it belongs to an error-generating state, and if it is not in error it belongs to an error-free state, with 1 indicating an error and 0 indicating an error-free bit. In this project I am using the model with three error-free states and one error-generating state, so the only possible transitions are the following, where B1 is the error-generating state and G1, G2 and G3 are the error-free states:
1. from state G1 to B1, or staying in G1;
2. from state G2 to B1, or staying in G2;
3. from state G3 to B1, or staying in G3;
4. from state B1 to G1, G2 or G3,
as shown in Fig. 1.6.

Fig. 1.6: all the possible transitions between the states.

The probabilities Pg1, Pg2 and Pg3 represent the probabilities of error-free bits in states G1, G2 and G3, while Pb1 represents the probability of error bits in state B1. The probabilities Pb1,1, Pb1,2 and Pb1,3 are the probabilities of a bit going from state B1 to states G1, G2 and G3, and the probabilities P1,1, P2,1 and P3,1 are the probabilities of a bit going from states G1, G2 and G3 to state B1. From the Fritchman model shown in Fig. 1.6 we can classify the two groups A and B, with all the error-free states in group A and the rest in group B; the 0s and 1s are allocated to the groups accordingly. To find the probability of an error-free run of bits in this model, the error-gap distribution is written as a weighted sum of terms of the form a*Pi^m, where the values a and Pi are the parameters of the model; the transition probabilities themselves can be found from experimental measurements. For a Fritchman model with only one error-generating state the burst length does not need to be calculated separately, because the error-generation process is fully defined by the transition probabilities of the error-free states.
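A minimal sketch of an error-sequence generator for this simplified Fritchman model is given below, following the transitions of Fig. 1.6: each good state either holds the bit or hands it to B1, and B1 hands it back to one of the good states. All of the numerical transition probabilities are illustrative assumptions, not the values used in Task 3.

% Minimal sketch: error sequence from a Fritchman model with three error-free
% states (G1, G2, G3) and one error state (B1), as in Fig. 1.6.
%       to G1   G2     G3     B1
P = [   0.97   0      0      0.03;    % from G1: stay or go to B1
        0      0.95   0      0.05;    % from G2: stay or go to B1
        0      0      0.90   0.10;    % from G3: stay or go to B1
        0.40   0.30   0.30   0   ];   % from B1: return to one of G1..G3

N      = 1000;
errors = zeros(1, N);
state  = 1;                                % start in G1
for n = 1:N
    errors(n) = (state == 4);              % only state B1 (index 4) emits an error
    state = find(rand < cumsum(P(state, :)), 1, 'first');   % draw the next state
end

mean(errors)                               % overall channel bit error rate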
Task 1:
Introduction:
Burst errors are the main problem in communication systems and are the main focus of this task. The probability of error in a given set of data will be determined, and the SNR (signal-to-noise ratio) is calculated in order to show the number of errors in the signal. Graphs of the probability of error against SNR are plotted and then used to compare the results. Using Matlab for the first time was a big challenge for me, but my previous programming experience and the help provided by the software itself made the task manageable. Dealing with errors in a signal requires some study of the underlying theory, and in order to code this task in proper steps a flow chart was made, which is discussed in the next section.

Methodology:
Simulating this task involved new Matlab functions, which were found with the help of the software itself; the full algorithm for this task is provided in the appendix. The flow chart below shows the steps followed in this task:

Fig. 2.0

First, the number of samples in the data was selected. The value of the SNR was calculated for each noise level with the following lines (the noise standard deviation sigma is stepped on each iteration):
sigma(i) = 0.5 - (0.05*i);
SNR(i) = -20*log10(sigma(i));
The next step was to generate random numbers with standard deviation equal to the calculated sigma value. Using for loops and if/else statements, limits were set to decide whether each bit was in error or error free. The following statement was used to generate the Gaussian noise samples:
n = normrnd(0, sigma(i));
Counters were used to record every bit found to be in error, and the probability of error was obtained by dividing the counter by the total number of samples:
Pe(i) = counter/nos;
To check the accuracy of the result, the exact probability of error was also calculated using the error function, together with an estimate of the statistical uncertainty; these formulas give a more accurate picture:
Pe_exact(i) = 1/2*erfc(1/(sigma(i)*sqrt(2)));
sgm(i) = (Pe(i) - Pe(i)^2)/nos;
relativerror(i) = sqrt(sgm(i))/Pe(i);
After calculating all the values, a graph was plotted of Pe (the simulated probability of error) and Pe_exact (the theoretical probability of error) against SNR. The result is shown below:

Fig. 2.1

From the graph it is visible that the SNR and the probabilities of error Pe and Pe_exact are inversely related: a decrease in the SNR value results in a corresponding increase in Pe and Pe_exact. As the number of errors in the signal increases, its SNR declines and the signal will not be decoded properly at the receiver. A number of graphs were obtained while debugging the program, but the graph above was selected because it shows the result most clearly.

Conclusion:
The task gives a clearer understanding of errors in a signal and their effects. We can generalize the result by saying that the SNR should be high for good transmission of data across a communication channel. Matlab proved to be user-friendly and appropriate software for this exercise. To conclude, the expected results were achieved and the task was completed successfully. A consolidated sketch of the simulation procedure is given below for reference.
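The sketch below pulls the pieces of the Task 1 procedure together into one runnable listing. It is a reconstruction, not the appendix code: the exact sweep of sigma values is assumed, and randn is used in place of the toolbox function normrnd quoted above.

% Minimal sketch: estimating the probability of error against SNR (Task 1 style).
nos = 1e5;                                   % number of samples per SNR point
sigma = zeros(1, 9); SNR = zeros(1, 9);
Pe = zeros(1, 9); Pe_exact = zeros(1, 9);
for i = 1:9
    sigma(i) = 0.5 - 0.05*i;                 % noise standard deviation (assumed sweep)
    SNR(i)   = -20*log10(sigma(i));          % SNR in dB for a unit-amplitude signal
    noise    = sigma(i) * randn(1, nos);     % zero-mean Gaussian noise samples
    counter  = sum((1 + noise) < 0);         % a transmitted +1 falls below the threshold
    Pe(i)    = counter / nos;                % simulated probability of error
    Pe_exact(i) = 0.5 * erfc(1/(sigma(i)*sqrt(2)));   % theoretical value
end
plot(SNR, Pe, 'o-', SNR, Pe_exact, 'x--');
xlabel('SNR (dB)'); ylabel('probability of error');
legend('simulated', 'exact (erfc)');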
Task 2:
Introduction:
This task required the simulation of the Gilbert model in Matlab while dealing with burst errors, both with and without interleaving and de-interleaving. This model was chosen because of its real-life use in communications for dealing with burst errors. The algorithm written for this task is contained in the appendix. The primary purpose of the task was to identify the advantages of interleaving and de-interleaving. The main objectives, for which the algorithm was developed, were the following: generating the data; applying the repetition code; interleaving and de-interleaving; generating the Gilbert model; allocating probabilities to the two states of the model (error free and error generating); and calculating the probability of burst errors. A thorough study of these methods was required before writing the algorithm. After reading the journal articles and books available on the internet and in the library, the flow chart shown in Fig. 3.0 was made in order to plan the course of action and generate accurate results.

Methodology:
This task involved a number of steps; all of them, in the correct sequence, are shown in the flow chart below, and a consolidated sketch of the whole chain is given at the end of this methodology.

Fig. 3.0

The flow chart provided in Fig. 3.0 was used for this task. The first step is to select the initial number of data bits to which the repetition code is applied. As explained in the literature review, in this type of coding each information bit is repeated twice more, so every bit is represented by three transmitted bits. In this case 27 bits were selected initially, which after coding became 81 bits. After selecting the number of bits, the next step is to generate 27 random numbers between -1 and 1 and then map them to +1 or -1 by taking their sign; this is done because decimal values or other integers cannot be used in the remaining steps. The following lines generate the data and convert the values to +1 and -1:
z = unifrnd(-1, 1, 1, 27);
data = sign(z);
The next step is the repetition code: every bit is repeated so that the total size of the data grows to 81 bits. The code below shows the method:
y = data(i) * [1 1 1];
coded = [coded y];
Interleaving the data is the next step. First the new 81-bit data is reshaped into a 9-by-9 matrix, because row-column interleaving is being used. After reshaping, the matrix is transposed in order to change the positions of the bits; if there is any long chain of errors in the data (i.e. a burst error), it is broken up by this interleaving. In order to transmit the data through the channel it must be in a single row, so the matrix is reshaped back into one row before being sent:
x = reshape(coded, 9, 9);
t1 = x';
XX = reshape(t1, 1, 81);
The channel contained the Gilbert model; the data was transmitted through it and the bits were allocated to the states according to the probability of each state. Different probabilities were set for the two states, good (error free) and bad (error generating), and limits were set for the two states with for loops and if/else statements. In this code 1 represents the good state and 0 represents the bad state, as shown in Fig. 1.3. Each time the program was run, the bit jumped from state to state, but the probability was monitored only when it was in the bad state. Whenever the bit was in the bad state it was compared with the original data and then inverted; the following line was used for this purpose (here n denotes the position of the bit currently in the bad state):
XX(n) = -XX(n);
where XX is the interleaved data and the minus sign on the right-hand side inverts the error bit. With this step the benefit of interleaving becomes obvious, as the error probability is reduced to a minimum when compared with the error probability calculated without interleaving. The next step was de-interleaving: the same reshape and transpose operations used for interleaving were applied again and the data was then passed to the receiver. The data is always transmitted through the channel as a single row, so the bit matrix was reshaped into one row. At the receiver end the received code was decoded using the majority rule, in which the three bits of each triplet are added together; for example, if two of the bits are -1 and one is +1 the sum is -1, so the sign of the sum gives the majority decision -1, while if all three bits are -1 the sum is -3. The number of errors was found by subtracting the decoded bits from the original data and counting the non-zero differences. Finally, the error probability was calculated by dividing the number of error bits by the total number of bits and by the number of runs of the program.
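A consolidated, runnable sketch of the chain just described (data generation, 3-bit repetition coding, 9-by-9 row-column interleaving, a Gilbert channel that inverts bits while in the bad state, de-interleaving and majority decoding, averaged over repeated runs) is given below. It is an illustrative reconstruction of the appendix algorithm: the variable names, the value of pgg and the sweep of pbb values are assumptions.

% Minimal sketch of the Task 2 chain described above.
runs = 100;                              % repetitions per channel setting
pgg  = 0.9;                              % probability of staying in the good state
pbb_list = 0.1:0.1:0.9;                  % bad-state probabilities to sweep
pe = zeros(size(pbb_list));

for k = 1:numel(pbb_list)
    pbb = pbb_list(k);
    errs = 0;
    for r = 1:runs
        data  = sign(rand(1, 27) - 0.5);             % random +1/-1 data bits
        coded = reshape(repmat(data, 3, 1), 1, 81);  % 3-bit repetition code

        XX = reshape(reshape(coded, 9, 9).', 1, 81); % row-column interleaving

        % Gilbert channel: invert every bit produced while in the bad state
        state = 'g';
        for n = 1:81
            if state == 'b', XX(n) = -XX(n); end
            if state == 'g'
                if rand > pgg, state = 'b'; end
            else
                if rand > pbb, state = 'g'; end
            end
        end

        rx = reshape(reshape(XX, 9, 9).', 1, 81);    % de-interleaving

        dec = sign(sum(reshape(rx, 3, []), 1));      % majority rule per triplet
        errs = errs + sum(dec ~= data);
    end
    pe(k) = errs / (27 * runs);                      % average probability of error
end

plot(pbb_list, pe, 'o-');
xlabel('probability of bad state, pbb'); ylabel('probability of error, pe');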
Plots were produced for the error probabilities of the two states, with and without interleaving. The graph below shows the probability of error (pe) against the probability of the bit staying in the bad state (pbb):

Fig. 3.1

The value of pbb was fixed and the program was run 100 times; a for loop was used to find the probability of error in the algorithm, and the average of all the probabilities obtained for the same state probability was taken, as this increases the accuracy of the result. The probability of the bad state was then changed and the algorithm was again run 100 times. This process was repeated for the remaining bad-state probabilities, and a curve was obtained on the graph above for both cases. It is quite visible from Fig. 3.1 that interleaving really helps a communication channel to achieve transmission with a minimum error rate: with interleaving the error probability is very small, while without interleaving it is very high, because the errors remain in the data and cannot be removed by any means. With interleaving the error rate is decreased, since the interleaver rearranges the bits of the data into a different order and breaks the long chain of burst errors, so the error bits can be identified by comparison and inverted; that is why there is such a large difference between the two curves. A relationship between the error probability and the probability of the bad state can also be seen from the graph: they are directly proportional, with the error probability rising as the probability of the bad state rises.

The next graph plots the probability of error against the probability of the good state.

Fig. 3.2

To obtain the plot shown in Fig. 3.2, only the good state was considered and the corresponding probabilities of error (pe) occurring in the channel were found. The same procedure was followed for calculating the pe values: the code was run 100 times with the same good-state probability (pgg) and the average of the error probabilities was taken. When the bit is in the good state it is error free, but there is still a chance that it will move to the bad state; in that case interleaving is again performed as a check, and any error found is corrected. That is why the pe values are very low: the chance of such errors occurring is not high. Without interleaving an error can occur and, if it is not corrected, it remains in the data, giving a higher value of pe.

Fig. 3.3

Fig. 3.3 shows a clear and wide difference between the error-probability values for a single state-probability value of 0.5. Without interleaving it is quite clear that the error rate increases roughly linearly and is very high at the point where the state probability is 0.5; with interleaving the curve stays very low, close to the x-axis. The special case pbb = 0.5 was selected for the state probability because the bit then has an equal chance of being in either of the two states; the corresponding plot for the good state with a state probability of 0.5 is shown in Fig. 3.4. There is no difference between Fig. 3.3 and Fig. 3.4, because in both cases the bit has the same probability of being in error or not in error.

Fig. 3.4

The linear graph above shows the probability values without interleaving: the error rate increases as the state probability (pgg) increases.
Conclusion:
From the graphs above the advantages of interleaving in communication systems are clear. It decreases the error probability of the data to the greatest extent possible and leads towards error-free transmission. Whenever a bit is in the bad state it needs to be corrected, otherwise it will not be decoded correctly at the receiver. The techniques used for removing the errors were successful in achieving this result. All the plots obtained are the result of careful coding and planning of the steps to be taken, and every plot shows the benefits of interleaving discussed earlier. The Gilbert model has been covered in detail and put to practical use. The task took me two months to complete; I had a few problems initially, but after a thorough study of the topic I became confident with the methods involved.

Task 3:
Introduction:
The final task of this project revolves around the Fritchman model. All the objectives of Task 2 remain the same; the only difference is that the Gilbert model is replaced by the Fritchman model. A Fritchman model with three error-free (good) states and one error-generating (bad) state was selected, and all the possible transitions of the bits between the states of this model are shown in Fig. 1.6. This task was difficult to perform; although it is a continuation of the previous task, with almost the same methods and objectives, it is based on real-time estimation as the bits move from state to state in the Fritchman model. All the steps have to be followed carefully in order to obtain the required results.
Methodology:
In order to plan out the steps and methods to be followed, a flow chart was made; the steps are almost the same as in Task 2.

Fig. 4.0

The number of data bits was increased from 27 to 363 so that more data is generated and the distribution of the bits among the chosen state probabilities can be observed more clearly. The repetition code was then applied, which increased the size of the data to 1089 bits. Row-column interleaving was carried out, this time in more detail: in Task 2 a very simple interleaving was performed in about four lines of code, but in this task a more detailed version of the interleaver was written in order to avoid mistakes in the code. Three error-free (good) states and one error-generating (bad) state were used, and all the states were defined by allocating probabilities to them. The probabilities of entering the bad state were kept very low, because a very long chain of burst errors was generated whenever a high probability was used for this state, and, as in Task 2, only the bad state was monitored. Counters were used for all the states so that it could be observed which state the bit was in; the counters made it easy to determine the current state. Whenever the bit was in the bad state the following line was used to invert the error bit:
intrlv = -1*intrlv;
where intrlv represents the interleaved data and the minus sign inverts the bit. This line was applied each time the bit was in error; when the bit was not in error and was in any of the three good states, the minus sign was removed, because an error-free bit does not need to be inverted. De-interleaving was then performed and the data was checked to make sure it had been de-interleaved properly. The majority rule was used again, for the same purpose as explained in Task 2. The data was then decoded and subtracted from the original data in order to find the error bits; the difference gave the number of bits in error, which was then used to calculate the probability of error. The resulting graph is shown below, and a brief sketch of this set-up is given at the end of this task:

Fig. 4.1

The graph in Fig. 4.1 clearly shows the difference between the probability-of-error curves for the two conditions, with and without interleaving. For the Fritchman model the probability of the bad state was kept very low, as can be seen on the x-axis, because increasing this probability increased the burst length considerably; keeping the values small made the results clearer by restricting the simulation to short chains of burst errors. As the probability of the bad state increases, the probability of a bit being in error also increases. When interleaving is performed the probability of error stays low; without interleaving the error bit is not corrected, it stays in the bad state, and the probability of error rises. Interleaving has once again shown its importance in the field of communications.
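For reference, a brief sketch of the Task 3 set-up is given below: the same chain as in Task 2, but with 363 data bits (1089 coded bits in a 33-by-33 interleaver block) and the four-state Fritchman channel. All the transition probabilities here are illustrative assumptions; as in the task itself, the probabilities of entering the bad state are kept small.

% Minimal sketch of the Task 3 set-up: repetition code, 33x33 row-column
% interleaving, and a Fritchman channel with three good states and one bad state.
p_to_bad = [0.02 0.03 0.05];            % G1, G2, G3 -> B1 (kept small, as in the report)
P = [1-p_to_bad(1) 0 0 p_to_bad(1);
     0 1-p_to_bad(2) 0 p_to_bad(2);
     0 0 1-p_to_bad(3) p_to_bad(3);
     0.4 0.3 0.3 0];

data   = sign(rand(1, 363) - 0.5);                   % random +1/-1 data bits
coded  = reshape(repmat(data, 3, 1), 1, 1089);       % 3-bit repetition code
intrlv = reshape(reshape(coded, 33, 33).', 1, 1089); % row-column interleaving

state = 1;                                           % start in G1
for n = 1:1089
    if state == 4, intrlv(n) = -intrlv(n); end       % bad state inverts the bit
    state = find(rand < cumsum(P(state, :)), 1, 'first');
end

rx  = reshape(reshape(intrlv, 33, 33).', 1, 1089);   % de-interleaving
dec = sign(sum(reshape(rx, 3, []), 1));              % majority decoding
pe  = mean(dec ~= data)                              % probability of error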