An Application of Heap Sort: (POJ) Entropy

Description
An entropy encoder is a data encoding method that achieves lossless data compression by encoding a message with “wasted” or “extra” information removed. In other words, entropy encoding removes information that was not necessary in the first place to accurately encode the message. A high degree of entropy implies a message with a great deal of wasted information; English text encoded in ASCII is an example of a message type that has very high entropy. Already compressed messages, such as JPEG graphics or ZIP archives, have very little entropy and do not benefit from further attempts at entropy encoding.

English text encoded in ASCII has a high degree of entropy because all characters are encoded using the same number of bits, eight. It is a known fact that the letters E, L, N, R, S and T occur at a considerably higher frequency than do most other letters in English text. If a way could be found to encode just these letters with four bits, then the new encoding would be smaller, would contain all the original information, and would have less entropy. ASCII uses a fixed number of bits for a reason, however: it’s easy, since one is always dealing with a fixed number of bits to represent each possible glyph or character. How would an encoding scheme that used four bits for the above letters be able to distinguish between the four-bit codes and eight-bit codes? This seemingly difficult problem is solved using what is known as a “prefix-free variable-length” encoding.

In such an encoding, any number of bits can be used to represent any glyph, and glyphs not present in the message are simply not encoded. However, in order to be able to recover the information, no bit pattern that encodes a glyph is allowed to be the prefix of any other encoding bit pattern. This allows the encoded bitstream to be read bit by bit, and whenever a set of bits is encountered that represents a glyph, that glyph can be decoded. If the prefix-free constraint was not enforced, then such a decoding would be impossible.

Consider the text “AAAAABCD”. Using ASCII, encoding this would require 64 bits. If, instead, we encode “A” with the bit pattern “00”, “B” with “01”, “C” with “10”, and “D” with “11” then we can encode this text in only 16 bits; the resulting bit pattern would be “0000000000011011”. This is still a fixed-length encoding, however; we’re using two bits per glyph instead of eight. Since the glyph “A” occurs with greater frequency, could we do better by encoding it with fewer bits? In fact we can, but in order to maintain a prefix-free encoding, some of the other bit patterns will become longer than two bits. An optimal encoding is to encode “A” with “0”, “B” with “10”, “C” with “110”, and “D” with “111”. (This is clearly not the only optimal encoding, as it is obvious that the encodings for B, C and D could be interchanged freely for any given encoding without increasing the size of the final encoded message.) Using this encoding, the message encodes in only 13 bits to “0000010110111”, a compression ratio of 4.9 to 1 (that is, each bit in the final encoded message represents as much information as did 4.9 bits in the original encoding). Read through this bit pattern from left to right and you’ll see that the prefix-free encoding makes it simple to decode this into the original text even though the codes have varying bit lengths.
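As an illustration of why the prefix-free property makes decoding unambiguous, here is a minimal sketch (not part of the problem statement; it hard-codes the table A="0", B="10", C="110", D="111" given above) that reads the 13-bit string one bit at a time and emits a glyph as soon as the accumulated bits match a code word:

#include <iostream>
#include <map>
#include <string>
using namespace std;

int main() {
    map<string, char> table = {{"0", 'A'}, {"10", 'B'}, {"110", 'C'}, {"111", 'D'}};
    string bits = "0000010110111", cur, out;
    for (char b : bits) {
        cur += b;                    // extend the current code word bit by bit
        auto it = table.find(cur);
        if (it != table.end()) {     // a complete code word was matched
            out += it->second;
            cur.clear();
        }
    }
    cout << out << endl;             // prints AAAAABCD
    return 0;
}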

As a second example, consider the text “THE CAT IN THE HAT”. In this text, the letter “T” and the space character both occur with the highest frequency, so they will clearly have the shortest encoding bit patterns in an optimal encoding. The letters “C”, “I”, and “N” only occur once, however, so they will have the longest codes.

There are many possible sets of prefix-free variable-length bit patterns that would yield the optimal encoding, that is, that would allow the text to be encoded in the fewest number of bits. One such optimal encoding is to encode spaces with “00”, “A” with “100”, “C” with “1110”, “E” with “1111”, “H” with “110”, “I” with “1010”, “N” with “1011” and “T” with “01”. The optimal encoding therefore requires only 51 bits compared to the 144 that would be necessary to encode the message with 8-bit ASCII encoding, a compression ratio of 2.8 to 1.
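As a quick check of the numbers above, the following sketch (assuming the character frequencies in “THE CAT IN THE HAT” and the code lengths of the encoding just listed) recomputes both totals and the compression ratio:

#include <cstdio>
#include <utility>
#include <vector>
using namespace std;

int main() {
    // {frequency in "THE CAT IN THE HAT", code length in bits}
    vector<pair<int, int>> glyphs = {
        {4, 2} /*space*/, {2, 3} /*A*/, {1, 4} /*C*/, {2, 4} /*E*/,
        {3, 3} /*H*/,     {1, 4} /*I*/, {1, 4} /*N*/, {4, 2} /*T*/
    };
    int ascii = 0, optimal = 0;
    for (auto& g : glyphs) {
        ascii   += g.first * 8;        // 8 bits per character in plain ASCII
        optimal += g.first * g.second; // frequency times code length
    }
    printf("%d %d %.1f\n", ascii, optimal, (double)ascii / optimal); // 144 51 2.8
    return 0;
}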

Input
The input file will contain a list of text strings, one per line. The text strings will consist only of uppercase alphanumeric characters and underscores (which are used in place of spaces). The end of the input will be signalled by a line containing only the word “END” as the text string. This line should not be processed.

Output
For each text string in the input, output the length in bits of the 8-bit ASCII encoding, the length in bits of an optimal prefix-free variable-length encoding, and the compression ratio accurate to one decimal point.

Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output
64 13 4.9
144 51 2.8

Problem summary:
You are given a string made up of uppercase letters and underscores (which stand for spaces). Using each character's number of occurrences as its weight, build a Huffman encoding and compute the total encoded length in bits, then compare it with the traditional 8-bit-per-character encoding and output the ratio between the two.
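As a sanity check against the first sample: the counts for "AAAAABCD" are A:5, B:1, C:1, D:1. Repeatedly merging the two smallest weights costs 1+1=2, then 2+1=3, then 3+5=8, and the sum of the merge costs, 2+3+8=13, is exactly the optimal length in the sample output; 64/13 ≈ 4.9 gives the ratio.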

Code 1 (hand-written heap construction and heap repair):

#include<iostream>
#include<vector>
#include<map>
#include<algorithm>
#include<numeric>
using namespace std;
int ans=1;//current number of elements stored in the heap
vector<int>a;//the heap, used from index 1
void HeapDown(int k,int size){//sift down to maintain the min-heap
	while(2*k<=size){//while a left child exists
		int j=2*k;//left child
		if(j<size&&a[j+1]<a[j])//pick the smaller of the left and right children
			j++;
		if(a[k]<=a[j])//if the parent is no larger than the child, stop
			break;
		else//otherwise swap them
			swap(a[k],a[j]);
		k=j;//and keep sinking
	}
}
void CreateHeap(int num,int k){//insert num at position k and sift it up
	a[k]=num;
	int i=k;
	while(1){
		if(i<=1)
			break;
		if(a[i/2]>a[i])//if the parent is larger than the child, swap (min-heap: the root is the minimum)
			swap(a[i],a[i/2]);
		i=i/2;//keep bubbling up
	}
}
int main(){
	string str;
	map<char,int>InD;//counts the occurrences of each character
	cin>>str;
	while(str!="END"){
		for(int i=0;i<str.length();i++)
			InD[str[i]]++;//map keys are unique, so this tallies each character
		a.resize(InD.size()+2);//1-based indexing; +2 keeps a[2] valid even with one distinct character
		ans=0;
		for(auto it : InD){
			CreateHeap(it.second,++ans);//insert every frequency into the min-heap
		}
		int L=0;//total length of the Huffman-encoded message in bits
		while(ans>2){
			int num1=a[1];//smallest weight
			swap(a[1],a[ans]);//(1) move the root to the last slot
			HeapDown(1,ans-1);//(2) restore the heap; (1)+(2) amount to deleting the root
			int num2=a[1];//second smallest weight
			swap(a[1],a[ans-1]);
			HeapDown(1,ans-2);
			int sum=num1+num2;
			L+=sum;//accumulate the weighted path length of the Huffman tree
			CreateHeap(sum,ans-1);//re-insert the merged weight, overwriting the slot of the second smallest
			HeapDown(1,ans-1);//sift down from the root as an extra safeguard
			ans--;//the heap now holds one fewer element
		}
		L+=a[1]+a[2];//final merge (with a single distinct character, a[2] is 0)
		cout<<str.length()*8<<" "<<L<<" ";
		printf("%.1f\n",((str.length()*8*1.0)/L));
		InD.clear();
		a.clear();
		cin>>str;
	}
	return 0;
}

Code 2 (using a priority queue):
A quick refresher:
1. First include the header file #include <queue>.
2. Declaration:
priority_queue<Type, Container, Functional>
  Type is the element type.
  Container is the underlying container (it must be an array-backed container such as vector or deque, not list; the STL default is vector).
  Functional is the comparator that defines the ordering.
3. Common member functions:
top(): access the top element
empty(): check whether the queue is empty
size(): return the number of elements
push(): insert an element (heap order is maintained)
emplace(): construct an element in place and insert it
pop(): remove the top element
swap(): exchange contents with another priority queue
4. The default is a max-heap.
//ascending order (min-heap)
priority_queue<int, vector<int>, greater<int>> q;
//descending order (max-heap)
priority_queue<int, vector<int>, less<int>> q;
5. A priority queue does not follow the first-in-first-out rule; instead there are two cases:
(1) a max-priority queue: regardless of insertion order, the current largest element is dequeued first;
(2) a min-priority queue: regardless of insertion order, the current smallest element is dequeued first.
A short demo of the min-heap form used below is shown after these notes.
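The snippet below is a minimal, self-contained demo (not part of the solution) of the min-heap declaration used in Code 2: elements come out in ascending order no matter how they are pushed.

#include <functional>
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main() {
    priority_queue<int, vector<int>, greater<int>> q; // min-heap
    vector<int> vals = {5, 1, 1, 1};                  // e.g. the frequencies of "AAAAABCD"
    for (int x : vals) q.push(x);
    while (!q.empty()) {
        cout << q.top() << ' ';                       // prints: 1 1 1 5
        q.pop();
    }
    cout << endl;
    return 0;
}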

#include<iostream>
#include<queue>
#include<map>
#include<vector>
#include<algorithm>
using namespace std;
int main(){
	priority_queue<int,vector<int>,greater<int>>res;//min-heap priority queue
	string str;//string that receives each input line
	int len,ans=0,sum;
	map<char,int>val;//counts the occurrences of each character
	cin>>str;
	while(str!="END"){
		len=str.length();//total length of the string
		for(int i=0;i<str.length();i++)//tally each character
			val[str[i]]++;
		for(auto it : val)//push every frequency into the min-heap
			res.push(it.second);
		ans=res.size()==1?res.top():0;//pitfall: the input may contain only one distinct character
		while(res.size()>1){
			int num1=res.top();//take the smallest weight
			res.pop();//remove it from the heap
			int num2=res.top();//take the second smallest weight
			res.pop();//remove it as well
			sum=num1+num2;//merge the two smallest weights
			ans+=(num1+num2);//accumulate the Huffman-encoded length
			res.push(sum);//push the merged weight back
		}
		cout<<len*8<<" "<<ans<<" ";
		printf("%.1f\n",(len*8*1.0)/ans);
		val.clear();//clear the counts for the next test case
		res.pop();//the loop stops with one element left on the heap, so pop it before the next round
		cin>>str;
	}
	return 0;
}