Summary of Classical Cryptography

With the emergence and development of computers, information security has gradually become a public concern. In this course I learned the overall framework of cryptography and studied classical ciphers; this post summarizes that part of the material.

Even in ancient times, people were conscious of the need to protect information, and a series of classical ciphers gradually emerged. In this chapter I studied several of them.

The core ideas of the classical ciphers I've learned are substitution and permutation.

The Caesar cipher came first. It is very simple: to encrypt, each plaintext letter is shifted k positions to the right in alphabetical order to get the ciphertext; to decrypt, the ciphertext is shifted the same number of positions back to the left. This cipher offers almost no confidentiality and is easily broken by an enumeration (brute-force) attack. The process of the Caesar cipher is as follows:

C = E_k(M) = (M + k) mod 26

M = D_k(C) = (C − k) mod 26
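
To make the shift concrete, here is a minimal C++ sketch I wrote (it assumes uppercase A–Z text and 0 ≤ k < 26):

#include <string>

// Caesar cipher over A-Z: shift each letter k positions to the right.
std::string caesar_encrypt(const std::string &m, int k) {
    std::string c = m;
    for (char &ch : c)
        if (ch >= 'A' && ch <= 'Z')
            ch = char('A' + (ch - 'A' + k) % 26);
    return c;
}

// Shifting left by k is the same as shifting right by 26 - k.
std::string caesar_decrypt(const std::string &c, int k) {
    return caesar_encrypt(c, (26 - k % 26) % 26);
}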

Later, on the basis of the Caesar cipher, the single-table (monoalphabetic) substitution cipher was derived, which replaces the original alphabetical order with an arbitrary new sequence. This method is somewhat more secure than the Caesar cipher, but it is still easily broken by statistical analysis of letter frequencies.

Simple single-table substitution cannot meet people's security needs. Subsequently, multiplicative encryption and affine encryption appeared, introducing the concept of the modular inverse. The process of affine encryption is as follows:

C = E_k(P) = (k1 · P + k2) mod 26

P = D_k(C) = k1^(−1) · (C − k2) mod 26
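
A minimal sketch of the two formulas, assuming letters are already encoded as 0–25 and that k1^(−1) has been found in advance (methods for that are in the supplementary narration below):

// Affine cipher on letters 0..25; gcd(k1, 26) must be 1 so that k1 has
// an inverse, e.g. k1 = 5 pairs with k1_inv = 21 (5 * 21 = 105 ≡ 1 mod 26).
int affine_encrypt(int p, int k1, int k2) {
    return (k1 * p + k2) % 26;
}
int affine_decrypt(int c, int k1_inv, int k2) {
    return k1_inv * (c - k2 + 26) % 26;  // + 26 keeps the operand non-negative
}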

This algorithm improves security somewhat, but it can still be broken by statistical analysis. Since the frequency of each letter in natural language follows a regular pattern, and affine encryption does not eliminate this regularity, an adversary can attack accordingly. The letter frequencies of English, from high to low, fall into the following five levels:

  1. e

  2. t, a, o, i, n, s, h, r

  3. d, l

  4. c, u, m, w, f, g, y, p, b

  5. v, k, j, z, q

Although the letters in the ciphertext are no longer arranged according to the above ranking, the statistical pattern of the substituted letters still exists.

In order to eliminate this statistical regularity, we need to flatten the frequency histogram. There are two ideas. The first is to reduce redundancy. The second is multi-table substitution encryption. The so-called multi-table substitution, as its name implies, encrypts the plaintext with multiple mapping tables, which are used periodically. Although this goes some way toward smoothing out letter frequencies, the adversary can enumerate the period, split the ciphertext by position, and transform the multi-table problem into several single-table problems; the statistical regularity then reappears, and the adversary can decipher the ciphertext and recover the plaintext.
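
Here is a minimal sketch of the periodic idea, written by me. For brevity each table is just a shift (a Caesar table); a real multi-table cipher would hold a full permutation per table, but the periodic structure the attack exploits is the same:

#include <string>
#include <vector>

// Periodic multi-table substitution over A-Z: table i % d encrypts
// position i, where d = shifts.size() is the period.
std::string multi_table_encrypt(const std::string &m, const std::vector<int> &shifts) {
    std::string c = m;
    for (std::size_t i = 0; i < c.size(); ++i)
        c[i] = char('A' + (c[i] - 'A' + shifts[i % shifts.size()]) % 26);
    return c;
}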

For quantitative analysis, the concept of the index of coincidence was proposed. I searched for relevant information and learned the following. In cryptography, coincidence counting is the technique (invented by William F. Friedman) of putting two texts side by side and counting the number of times that identical letters appear in the same position in both texts. The index of coincidence provides a measure of how likely it is to draw two matching letters by randomly selecting two letters from a given text.

The calculation method of IC is given below.

IC = c · [ n_a(n_a − 1) + n_b(n_b − 1) + ⋯ + n_z(n_z − 1) ] / [ N(N − 1) ]

where c is the normalizing coefficient (26 for English), n_a is the number of times the letter "a" appears in the text, and N is the length of the text.

IC = [ Σ_{i=1}^{c} n_i(n_i − 1) ] / [ N(N − 1) / c ]

where N is the length of the text and n_1 through n_c are the frequencies (as integers) of the c letters of the alphabet (c = 26 for monocase English). The sum of the n_i is necessarily N.
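
To experiment with this, I wrote the small helper below following the first formula (it assumes an uppercase A–Z text with at least two letters):

#include <string>

// Index of coincidence with normalizing coefficient c = 26:
// IC = 26 * sum(n_i * (n_i - 1)) / (N * (N - 1)).
// Uniformly random text scores about 1.0; English-like text about 1.73.
double index_of_coincidence(const std::string &text) {
    long long n[26] = {0}, N = 0;
    for (char ch : text)
        if (ch >= 'A' && ch <= 'Z') { ++n[ch - 'A']; ++N; }
    long long sum = 0;
    for (int i = 0; i < 26; ++i)
        sum += n[i] * (n[i] - 1);
    return 26.0 * sum / double(N * (N - 1));
}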

In addition, the teacher mentioned the Vigenère cipher. I looked into it after class.

The idea behind the Vigenère cipher, like all other polyalphabetic ciphers, is to disguise the plaintext letter frequency to interfere with a straightforward application of frequency analysis. For instance, if P is the most frequent letter in a ciphertext whose plaintext is in English, one might suspect that P corresponds to E, since E is the most frequently used letter in English. However, by using the Vigenère cipher, E can be enciphered as different ciphertext letters at different points in the message, which defeats simple frequency analysis.

The primary weakness of the Vigenère cipher is the repeating nature of its key. If a cryptanalyst correctly guesses the key's length, the ciphertext can be treated as interwoven Caesar ciphers, which can easily be broken individually. The Kasiski examination and the Friedman test can help to determine the key length.

So we have to mention the Kasiski examination and the Friedman test. In 1863, Friedrich Kasiski was the first to publish a successful general attack on the Vigenère cipher. Earlier attacks relied on knowledge of the plaintext or the use of a recognizable word as a key; Kasiski's method had no such dependencies. The Kasiski examination, also called the Kasiski test, takes advantage of the fact that repeated words are, by chance, sometimes encrypted using the same key letters, leading to repeated groups in the ciphertext.
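
As a rough illustration of the Kasiski idea I sketched the function below (kasiski_candidate is my own name, not standard). Distances between repeated trigrams tend to be multiples of the key length, so their gcd is a candidate; chance repeats can pollute it, so in practice one inspects the common divisors of the individual distances:

#include <map>
#include <numeric>
#include <string>

// Gcd of the distances between consecutive occurrences of each repeated
// trigram; returns 0 if no trigram repeats. Requires C++17 for std::gcd.
int kasiski_candidate(const std::string &ct) {
    std::map<std::string, std::size_t> last;  // last position of each trigram
    int g = 0;
    for (std::size_t i = 0; i + 3 <= ct.size(); ++i) {
        std::string tri = ct.substr(i, 3);
        auto it = last.find(tri);
        if (it != last.end())
            g = std::gcd(g, int(i - it->second));
        last[tri] = i;
    }
    return g;
}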

The Friedman test (sometimes known as the kappa test) was invented during the 1920s by William F. Friedman, who used the index of coincidence, which measures the unevenness of the cipher letter frequencies, to break the cipher. By knowing the probability κ_p that any two randomly chosen source-language letters are the same (around 0.067 for monocase English) and the probability of a coincidence for a uniform random selection from the alphabet κ_r (1/26 ≈ 0.0385 for English), the key length can be estimated as

key length ≈ (κ_p − κ_r) / (κ_o − κ_r)

from the observed coincidence rate

κ_o = [ Σ_{i=1}^{c} n_i(n_i − 1) ] / [ N(N − 1) ]

in which c is the size of the alphabet (26 for English), N is the length of the text, and n_1 to n_c are the observed ciphertext letter frequencies, as integers.

That is, however, only an approximation; its accuracy increases with the size of the text. In practice it would be necessary to try various key lengths close to the estimate. A better approach for repeating-key ciphers is to copy the ciphertext into rows of a matrix with as many columns as an assumed key length and then to compute the average index of coincidence with each column considered separately. When that is done for each possible key length, the highest average IC corresponds to the most likely key length. Such tests may be supplemented by information from the Kasiski examination.
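
The column-wise test can be sketched like this, reusing the index_of_coincidence helper from above (my own illustration): call it for each candidate key length d and pick the d whose average is closest to the plain-English value of about 1.73.

#include <string>

// Average index of coincidence of the d columns obtained by taking every
// d-th letter of the ciphertext, one column per assumed key position.
double average_column_ic(const std::string &ct, int d) {
    double total = 0;
    for (int i = 0; i < d; ++i) {
        std::string col;
        for (std::size_t j = i; j < ct.size(); j += d)
            col += ct[j];
        total += index_of_coincidence(col);
    }
    return total / d;
}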

To sum up, classical cryptography is not secure; it is easily broken by enumeration attacks and statistical analysis. But it provided a foundation for later generations to study cryptography, and its significance is indelible.

Supplementary narration:

I. The Method of Finding the Inverse Element

1. Fermat's little theorem: a^(p−1) ≡ 1 (mod p)

It can be deduced from Fermat's little theorem that a^(−1) ≡ a^(p−2) (mod p)
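
A minimal sketch of computing the inverse this way, using exponentiation by squaring (it assumes p is prime, gcd(a, p) = 1, and p small enough that p * p fits in long long):

// Modular exponentiation by repeated squaring, O(log e) multiplications.
long long pow_mod(long long a, long long e, long long p) {
    long long r = 1;
    a %= p;
    while (e > 0) {
        if (e & 1) r = r * a % p;
        a = a * a % p;
        e >>= 1;
    }
    return r;
}

// a^(-1) mod p by Fermat's little theorem: a^(p-2) mod p.
long long inverse_fermat(long long a, long long p) {
    return pow_mod(a, p - 2, p);
}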

2. Extended Euclidean algorithm

a · a^(−1) ≡ 1 (mod b), which is equivalent to a · a^(−1) + b·k = 1

Extended form: a·x + b·y = c (solvable when c mod gcd(a, b) = 0)

I wrote the following C++ program to express this:

// Solve a*x + b*y = c with the extended Euclidean algorithm.
// Assumes c is divisible by gcd(a, b), so a solution exists.
void exgcd(int a, int b, int c, int &x, int &y) {
    if (a == 0) {
        x = 0;
        y = c / b;  // base case: 0*x + b*y = c
        return;
    }
    int tx, ty;
    exgcd(b % a, a, c, tx, ty);  // (b % a)*tx + a*ty = c
    x = ty - (b / a) * tx;       // back-substitute b % a = b - (b / a) * a
    y = tx;
}
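
For example, to find 5^(−1) mod 26 one can solve 5x + 26y = 1 and normalize x into [0, 26):

int x, y;
exgcd(5, 26, 1, x, y);           // gives x = -5, y = 1: 5 * (-5) + 26 * 1 = 1
int inv = ((x % 26) + 26) % 26;  // inv == 21, and indeed 5 * 21 = 105 ≡ 1 (mod 26)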

3. Linear algorithm:

A[i] = −(p / i) · A[p mod i] (mod p)

By recursively applying this formula to the inverse of a single number, the time complexity is reduced to O(log₂ p); applying it iteratively for i = 1, …, n computes all n inverses in O(n) total, which is why it is called the linear algorithm.
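
A minimal sketch of the table version (it assumes p is prime, 1 ≤ n < p, and p small enough that products fit in long long):

#include <vector>

// All inverses 1..n modulo a prime p in O(n), from the recurrence
// inv[i] = -(p / i) * inv[p % i] (mod p); p % i < i, so it is already computed.
std::vector<long long> inverse_table(int n, long long p) {
    std::vector<long long> inv(n + 1);
    inv[1] = 1;
    for (int i = 2; i <= n; ++i)
        inv[i] = (p - (p / i) * inv[p % i] % p) % p;
    return inv;
}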

II. Another multi-table replacement

By looking through books I learned another way to do multi-table replacement: the mapping tables are replaced by an affine transformation on blocks. First, the plaintext M is divided into groups M_1, M_2, …, M_j, each composed of n letters. Each group M_i is encrypted as follows:

C_i = A · M_i + B (mod N),  i = 1, 2, …, j

Here (A, B) is the key, where A is an invertible n × n matrix satisfying gcd(|A|, N) = 1.

The decryption process of ciphertext block C_i is as follows:

M_i = A^(−1) · (C_i − B) (mod N),  i = 1, 2, …, j
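
A minimal 2 × 2 sketch of this block affine scheme with N = 26, written by me; A_inv is assumed precomputed as det(A)^(−1) · adj(A) mod 26, with the determinant's inverse found by one of the methods above:

#include <array>

using Vec2 = std::array<int, 2>;
using Mat2 = std::array<std::array<int, 2>, 2>;

// Encrypt one block: C = A * M + B (mod 26).
Vec2 encrypt_block(const Mat2 &A, const Vec2 &B, const Vec2 &M) {
    Vec2 C;
    for (int i = 0; i < 2; ++i)
        C[i] = ((A[i][0] * M[0] + A[i][1] * M[1] + B[i]) % 26 + 26) % 26;
    return C;
}

// Decrypt one block: M = A_inv * (C - B) (mod 26); A_inv exists exactly
// when gcd(det(A), 26) = 1.
Vec2 decrypt_block(const Mat2 &A_inv, const Vec2 &B, const Vec2 &C) {
    int d0 = C[0] - B[0], d1 = C[1] - B[1];
    Vec2 M;
    for (int i = 0; i < 2; ++i)
        M[i] = ((A_inv[i][0] * d0 + A_inv[i][1] * d1) % 26 + 26) % 26;
    return M;
}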
