Binary Packet

1 Who is this article for?

This article is written for readers with an understanding of networking, an intermediate understanding of the language they plan to use, and a strong understanding of bitwise operations. It demonstrates a way to form packets compactly by working at the bit level rather than the byte or string level. In other words, this is for people who need to send a lot of information, like in a game, using as little bandwidth as possible.

2 What is a binary packet?

Simply put, a binary packet is a contiguous array of bytes representing a format used to serialize data. What does this mean exactly? It's a way to pack data tightly together. Let's say you wanted to send a player's position represented by a mathematical vector of two floating point values. Each element in the vector is a single 4-byte float, so you can store both values in 8 bytes. Moving below the byte level, writing a boolean would use 1 bit, with 0 representing the false state and 1 representing the true state.

Implementation-wise, the binary packet is just a binary writer and reader rolled into one class. The term class is used because object-oriented programming is most commonly used to implement this. In the end an interface is made giving the user access to write and read methods for various data types.

3 Packet format, use cases and considerations

Before going into the implementations, the reason for a binary packet needs to be explained. At its core, the data being sent over a socket is meaningless. Much like the data in RAM, it only has the meaning you give to it, so 32 bits can be seen as an integer, a float, or even 32 boolean values. Therefore, in order to use a binary packet you have to decide on a format for your messages.

A format will be proposed and dissected running through common use cases. The format will be of the form:

Packet Length | (Event Identifier | Data) x N

This format has a header (the prefix of the packet) representing the length in bits. UDP has a concept of packets, so the receiver will know the size in bytes of each packet they receive. However, that does not give us the number of bits we used to encode our data, so we must include this length ourselves. TCP has no concept of a packet; instead it presents a stream of data with no clear end, so the bit length tells us how many bytes make up our pseudo-packet.

The more important reason for a length is to treat each set of messages as a transaction — this article refers to them as packets. Doing this you can bundle things like state updates safely together without the fear that they might be evaluated on different time-steps. As an example, imagine sending 200 state updates from the server and the packet is fragmented. You don’t want to handle half the state updates in one time step and the other half in the next one. Any physics or gameplay interaction could be desynchronized by such an event. That’s why the idea of a packet is used.

Following the header there is a body. The body is a sequence of event identifiers (IDs) and data corresponding to that event ID. But what is an event ID? It’s simply an enumeration (names mapped to a constant integer). In a server client model you have ServerEvents and ClientEvents that hold these values. ServerEvents contains events the server sends to the client. These include things like ping, the login result, and state updates. The ClientEvents contains events the client sends to the server such as pong, login, and input data.

The two event enumerations are stored in a separate library that both the client and server can access.

namespace SocketEvents
{
	public enum ServerEvents
	{
		None,
		Ping,
		LoginResult,
		StateUpdates
	}

	public enum ClientEvents
	{
		None,
		Pong,
		Login,
		Input
	}
}

The data section is optional. For example, ping isn’t required to have any data section. The data section is just extra information for that event ID. The key is that it is in a format. When you’re processing a packet you will be reading event IDs and then handling the data section sequentially.

If this isn’t clear then the following example will help you to understand. Imagine the client sends a packet of the format:

Packet Length | ClientEvents.Login | Username | Password

The first message when processed will read in ClientEvents.Login and would immediately expect two strings, Username and Password. An error would occur if all that was in the packet was:

Packet Length | ClientEvents.Login | 5

When the program tries to read the username string it would try to interpret the 4 byte integer as a string. If it failed to do that then you’d have a corrupt packet, which when coming from the client is extremely suspicious.

Moving on, you may be wondering how a string is written and read. Just like when storing a string in memory, two common methods arise: either a terminating character such as a null is appended to the end, or the length of the string is prepended before the characters. This is covered in the section about strings.

To illustrate the bit part of this writer a boolean was quickly mentioned. The true and false values can be represented by a single bit. Computers pad the bits so that they can be stored in RAM at convenient aligned byte positions. A binary packet doesn’t have to do this and can align data to the bit level. For example, say you had the movement input up, down, left, right to serialize. Each state is either true or false representing a key state. So a packet for that input might look like:

Packet Length | ClientEvents.Input | Up | Down | Left | Right

It costs only 4 bits for the 4 states. You may be familiar with this concept as bit flags or bit vectors.
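
To make the bit-flag idea concrete, here is a minimal sketch of packing the four key states into the low bits of a single value before it gets written to the buffer. The PackInput helper and its bit layout are hypothetical, not part of the article's interface:

```csharp
using System;

class BitFlagsDemo
{
    // Pack four key states into the low 4 bits of one value.
    // Assumed bit layout (most significant first): Up, Down, Left, Right.
    public static byte PackInput(bool up, bool down, bool left, bool right)
    {
        byte packed = 0;
        if (up)    packed |= 1 << 3;
        if (down)  packed |= 1 << 2;
        if (left)  packed |= 1 << 1;
        if (right) packed |= 1 << 0;
        return packed;
    }

    static void Main()
    {
        // Up and Right held: 0b1001
        Console.WriteLine(PackInput(true, false, false, true)); // prints 9
    }
}
```

A real writer would emit only the low 4 bits of this value rather than a whole byte.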

Booleans aren't the only data type that benefits from this packing strategy. Imagine you have a variable on your server that doesn't change much, such as the max health of a character. Let's say this number is sent to the client or known to be 100. Any value from 0 to 127 fits in 7 bits, so when sending state changes to health the cost is only 7 bits for the health value. The section on N-Bit integers covers this in more detail.

What if you have an integer that is normally below 100, but might be much larger once in a great while? The concept of a variable-width encoded integer is covered in the section Variable-Width Integer. The idea is that you can use fewer bits to encode smaller numbers and more bits to encode larger numbers. A solid example of this is UTF-8.

Serializing integers is fine, but once in a while you'll need to send floating point data. Sending 32-bit floats or 64-bit doubles tends to be costly when you need to send a lot of them. Luckily, for certain applications you might not need the full precision, or you might know the minimum and maximum value. Custom resolution floats solve this problem and are covered in the section Custom Resolution Float.

4 Topics

4.1 Underlying Data Structure

The concept behind a binary packet is simply a long bit vector (sequence of bits). This is probably the most complicated part to get right if you have never used bitwise operators. The class consists of a buffer object which is just a dynamic array of bytes along with an integer bit index that holds the current bit position for writing and reading. Say we need to serialize a bit with the value one into the buffer at the bit index 0.

The first step is to expand the buffer if it’s too small.

while ((bitIndex + bits + 7) / 8 > buffer.Count)
{
	buffer.Add(0);
}

Go through a few different bitIndex values to understand how the algorithm works. The “bits” in the equation just represents the number of bits to be written, so a byte would have “bits” equal to eight.

The second step is to copy the bits into the buffer at the bit index. As an example to copy a single set bit the algorithm would look like:

buffer[bitIndex / 8] |= (byte)(1 << (7 - bitIndex % 8));

In this example you are taking 1 (in binary, 00[...]001) and shifting all the bits left by 7 - bitIndex % 8 (remember that modulus has higher precedence than subtraction). So if bitIndex were 1 you'd shift left 6 times and end up with 64, or 0b01000000. This value is OR'ed with the buffer byte, setting the second bit of that byte.

This example only covered writing a single bit. To handle larger sequences of bits study the code appended to this article. It’s a good idea to attempt doing this part on your own for practice first. Try to write a byte to the buffer given any bit index.
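
As a starting point, the single-bit case above can be looped to handle multi-bit values, most significant bit first. The BitBuffer name and Write signature below are assumptions for this sketch, not the article's final interface:

```csharp
using System;
using System.Collections.Generic;

class BitBuffer
{
    readonly List<byte> buffer = new List<byte>();
    int bitIndex = 0;

    // Write the low `bits` bits of `value`, most significant bit first.
    public void Write(uint value, int bits)
    {
        // Same expansion step as shown above.
        while ((bitIndex + bits + 7) / 8 > buffer.Count)
        {
            buffer.Add(0);
        }
        // Copy one bit at a time; simple, and a fine starting point.
        for (int i = bits - 1; i >= 0; --i)
        {
            if (((value >> i) & 1) != 0)
            {
                buffer[bitIndex / 8] |= (byte)(1 << (7 - bitIndex % 8));
            }
            ++bitIndex;
        }
    }

    public byte this[int i] { get { return buffer[i]; } }

    static void Main()
    {
        var b = new BitBuffer();
        b.Write(1, 1);    // a single set bit at bit index 0
        b.Write(0xAB, 8); // a byte straddling two buffer bytes
        Console.WriteLine(b[0]); // prints 213 (0b11010101)
        Console.WriteLine(b[1]); // prints 128 (0b10000000)
    }
}
```

A faster implementation would copy whole bytes at a time with shifts and masks, but the bit-at-a-time loop is much easier to get right first.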

4.2 Strings

Prepending the length and then the characters is a good way to handle strings. Strings are probably one of the easiest concepts to understand. You just need to prepend a length and then list a bunch of ASCII characters, right? Wrong. In this day and age Unicode is used, so supporting it should be on your list of priorities. On the plus side, UTF-8 already uses a variable-width encoding. If you aren't familiar with this concept, ASCII characters cost 1 byte and other characters cost between 2 and 4 bytes; most characters can be represented in 2 bytes.

The only way to support Unicode in your binary writer/reader is to understand it and implement functions for handling it. Some frameworks like .NET have support that makes using it trivial, but even if you're not using a .NET language, writing an encoder and decoder is still pretty easy (or use a library for Unicode support). In any case, make sure you read and understand how UTF-8 works.
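
As a sketch of the length-prefix approach using .NET's built-in encoder: a fixed 4-byte length is used here purely to keep the example short, where a real packet would write the length as a variable-width integer instead.

```csharp
using System;
using System.Text;

class StringDemo
{
    // Length-prefixed UTF-8: byte count first, then the encoded bytes.
    // The 4-byte length prefix is a simplification for this sketch.
    public static byte[] Serialize(string s)
    {
        byte[] utf8 = Encoding.UTF8.GetBytes(s);
        byte[] packet = new byte[4 + utf8.Length];
        BitConverter.GetBytes(utf8.Length).CopyTo(packet, 0);
        utf8.CopyTo(packet, 4);
        return packet;
    }

    static void Main()
    {
        // 'é' costs 2 bytes in UTF-8, so "héllo" is 6 bytes + 4 length bytes.
        Console.WriteLine(Serialize("héllo").Length); // prints 10
    }
}
```

The reader does the reverse: read the length, read that many bytes, and hand them to Encoding.UTF8.GetString.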

This idea of Unicode doesn't rule out clever tricks. Even though UTF-8 is variable length, you can still use tricks to cut the size down, especially for ASCII. Huffman coding is the perfect example.

4.3 N-Bit Integer

Sometimes you don't need to use exactly 8, 16, or 32 bits. Maybe you want to use 24? For unsigned values you need to find the maximum value you want to store and calculate the bits required (for signed values this is where two's complement comes into play). Say you needed to store the values 0 through 3; then you can easily use 2 bits. The interface for this function will take an unsigned integer value and a bits parameter where the user can specify the number of bits to store the value in. Again, study the code appended to this article.
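
A small helper like the hypothetical BitsRequired below shows how the minimum width can be computed from a known maximum value:

```csharp
using System;

class NBitDemo
{
    // Smallest number of bits that can hold every value from 0 to maxValue.
    public static int BitsRequired(uint maxValue)
    {
        int bits = 1;
        while (maxValue >> bits != 0)
        {
            ++bits;
        }
        return bits;
    }

    static void Main()
    {
        Console.WriteLine(BitsRequired(3));   // prints 2 (values 0..3)
        Console.WriteLine(BitsRequired(100)); // prints 7 (the max health example)
        Console.WriteLine(BitsRequired(128)); // prints 8
    }
}
```

Computing this once when a maximum is known saves guessing bit widths by hand.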

4.4 Variable-Width Integer

After understanding UTF-8 this method will seem like a common-sense optimization. Let's say you know the number you'll be serializing is less than 32 most of the time, but you want to allow for larger values in case they happen. Integers between 0 and 31 fit in 5 bits, so to accommodate larger numbers we can write a bool saying whether the number fit in 5 bits: false means it did, true means it didn't. We'll call these continuation bits, and they are prepended to the sequence of bits that contains the value. When a continuation bit is 0 you stop, knowing the value fit in the bits read so far. When it is 1 you'll need another 5 bits and another continuation bit. So, to make sure you're following: the value 32 as an unsigned integer with 5-bit intervals is encoded as 0b10 00001 00000 (you'll notice the least significant bit is on the right). For each extra sequence of bits another continuation bit is used until the value sequence reaches 32 bits or more. Once the sequence reaches the maximum of 32 bits, a final 0 continuation bit is unneeded, since it's known that the sequence can grow no further.
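
To make the cost of this scheme concrete, here is a sketch that only computes the encoded size in bits of a given value, assuming 5-bit groups and the dropped-final-continuation-bit rule described above (EncodedBits is an illustrative name, not the article's interface):

```csharp
using System;

class VarIntDemo
{
    const int GroupBits = 5;

    // Bit cost of a value: 5-bit groups, one continuation bit per group,
    // with the last continuation bit dropped once the value bits reach
    // the full 32-bit width.
    public static int EncodedBits(uint value)
    {
        int groups = 1;
        // Add groups until the remaining high bits are all zero or we hit 32.
        while (groups * GroupBits < 32 && (value >> (groups * GroupBits)) != 0)
        {
            ++groups;
        }
        int valueBits = Math.Min(groups * GroupBits, 32);
        int contBits = valueBits >= 32 ? groups - 1 : groups;
        return contBits + valueBits;
    }

    static void Main()
    {
        Console.WriteLine(EncodedBits(31));            // prints 6  (0 11111)
        Console.WriteLine(EncodedBits(32));            // prints 12 (10 00001 00000)
        Console.WriteLine(EncodedBits(uint.MaxValue)); // prints 38
    }
}
```

Note the trade-off: small values cost 6 bits instead of 32, while the rare worst case costs 38.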

To illustrate this look at the table below with unsigned examples:

Value       Interval  Binary (space separates continuation bits from the value bit sequence)
280         9 bits    0 100011000 (1 cont. bit + 9 bits = 10 bits)
71200       9 bits    10 010001011000100000 (2 cont. bits + 18 bits = 20 bits)
4294967295  9 bits    111 11111111111111111111111111111111 (3 cont. bits + 32 bits = 35 bits)

This concept is very nice to understand and can save small amounts of bandwidth, or large amounts in the case of UTF-8 compared to UTF-32.

4.5 Custom Resolution Float

To understand a custom resolution float let’s start with an example of a long range radar in a game. It might have position data for many different entities. The main concept though is that these positions don’t need full precision. Imagine a 100×100 pixel map and you have 10 types of entities and 100 entities within the radar. Each entity is represented by their floating point position with an x and y coordinate. For the radar example we map entities relative to the player using:

\text{Vector2 position} = (\text{entity.Position} - \text{player.Position}) * \text{scalar} + \text{new Vector2}(50.0, 50.0);

Scalar is the ratio of the real map to the radar, used to scale the positions. Now if all the entities are to be drawn within the radar we can offset them into the 0.0 to 100.0 range as shown. How many bits does it take to serialize the numbers 0.0 to 100.0? We talked about integers, but didn't cover floats. We can actually choose any arbitrary number of bits. Let's say we use 5 bits. In 5 bits we can represent 0 to 31. Our floating point range can be mapped to these 32 values using:

\text{newValue} = (\text{value} - \text{min}) / (\text{max} - \text{min}) * (2^\text{bits} - 1)

or in our example:

\text{newPositionX} = (\text{position.X} - 0.0) / (100 - 0) * 31

This obviously restricts the possible positions in the radar. Entities next to one another might appear on top of one another in the map due to the lack of precision. Using more bits fixes this problem. At any rate even if you choose 7 bits you’re still not anywhere near 32 bits for each float. To find the precision you simply need to do:

\text{precision} = (\text{max} - \text{min}) / (2^\text{bits} - 1)

In our example we get:

3.22 \approx (100 - 0) / 31

This means that when plotting a player they will be within 3 pixels of their correct spot on the radar.
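
A minimal sketch of the quantize and dequantize steps follows the formula above. Rounding to the nearest step is an assumption here; truncating also works but doubles the worst-case error:

```csharp
using System;

class QuantizeDemo
{
    // Map a value in [min, max] onto the integers 0 .. 2^bits - 1.
    public static uint Quantize(double value, double min, double max, int bits)
    {
        uint steps = (1u << bits) - 1;
        return (uint)Math.Round((value - min) / (max - min) * steps);
    }

    // Map a quantized code back into [min, max].
    public static double Dequantize(uint q, double min, double max, int bits)
    {
        uint steps = (1u << bits) - 1;
        return min + (double)q / steps * (max - min);
    }

    static void Main()
    {
        uint q = Quantize(42.0, 0.0, 100.0, 5);
        Console.WriteLine(q); // prints 13
        // The round trip lands within the 3.22 precision computed above.
        Console.WriteLine(Math.Abs(Dequantize(q, 0.0, 100.0, 5) - 42.0) <= 3.22); // prints True
    }
}
```

Only the quantized code is written to the packet; the receiver needs the same min, max, and bit count to reverse it.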

So how would this look in a packet:

Packet Length | ServerEvents.RadarUpdate | Entity Count | (Entity Type | X | Y) x Entity Count

Using the information:

Packet Length: 32 bits
Event ID: 7 bits
Entity Count: 7-bit variable-width integer (the value 100 would cost 8 bits)
Entity Type: 4-bit integer (10 possible types)
X: 6-bit custom resolution float
Y: 6-bit custom resolution float

The bit resolution of 6 bits gives us a precision of 1.6 pixels. The equation to find how many bits that message costs is given by:

32 + 7 + 8 + (4 + 6 + 6) * 100~\text{entities} = 1647~\text{bits} = 206~\text{bytes}

This byte count is small enough to be included with other updates. If it were over 1400 bytes it might not be as friendly on the network.

There is a special case that was left out. Imagine using the range -100 to 100: the value zero cannot be represented perfectly no matter what bit resolution is used. But what if that range is for velocity and you need to be able to represent zero exactly? This problem is much simpler to solve than it sounds. When the minimum is less than zero, the maximum is greater than zero, and the value is zero, then the value stored in the bits is zero. If the value is not zero, it is quantized into the range 0 to 2^{\text{bits}} - 2 and then offset by one, so the stored codes run from 1 to 2^{\text{bits}} - 1. For 6 bits that means quantizing to 0 through 62, offsetting to 1 through 63, with the code 0 reserved for the value zero. Just so this is clear: when the minimum and maximum are both greater than or equal to zero, the other method is used with no special case for zero. The code at the bottom has an implementation of this method.
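
A sketch of this zero-reserving variant might look like the following (Quantize is an illustrative name; rounding mode is an assumption as before):

```csharp
using System;

class ZeroReservedDemo
{
    // When min < 0 < max, code 0 is reserved for the exact value zero;
    // all other values use codes 1 .. 2^bits - 1.
    public static uint Quantize(double value, double min, double max, int bits)
    {
        if (value == 0.0)
        {
            return 0;
        }
        // Quantize to 0 .. 2^bits - 2, then offset by one.
        uint steps = (1u << bits) - 2;
        return 1 + (uint)Math.Round((value - min) / (max - min) * steps);
    }

    static void Main()
    {
        Console.WriteLine(Quantize(0.0, -100.0, 100.0, 6));    // prints 0
        Console.WriteLine(Quantize(-100.0, -100.0, 100.0, 6)); // prints 1
        Console.WriteLine(Quantize(100.0, -100.0, 100.0, 6));  // prints 63
    }
}
```

The decoder simply checks for code 0 first and otherwise subtracts one before reversing the mapping.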

4.6 Array Base Encoding

This final topic is really just a compression-theory overview with limited applications, for those that seek maximum bit usage. Imagine you have an array of 16 integers with values 0 to 2. If you wanted to sequentially write these values you could easily use 2 bits for each element, since two bits can hold 4 unique values and you only have 3. Using 2 bits for each of the 16 elements would be a total of 32 bits. However, there is a more optimal encoding. The range 0 to 2 is 3 unique values, which can be viewed as base 3, and to go from base 3 to base 2 we can treat each element in the array as a digit. A good analogy is how we optimally encode base 10 in base 2. Each digit goes from 0 to 9, a total of 10 values. In the number 123 we have 3 digits. Expanding the tens and hundreds places out we get the equivalent expression, which goes from base 10 to base 10:

123 = 1 * 10^2 + 2 * 10^1 + 3 * 10^0

Most people are familiar with this concept from hexadecimal, which is base 16. Converting 0x123 to base 10 is equivalent to:

291 = 1 * 16^2 + 2 * 16^1 + 3 * 16^0

Writing this out generically you’d end up with this:

value = \sum\limits_{i=0}^{n-1} a[i] * b^i

Luckily for us computers automatically do their calculations in base 2 so there is no need to convert from base 10 to base 2 to retrieve the final bits.

This base-2 value in memory encodes all of the array data with a maximum value of b^n - 1, so the number of bits required is \lceil \log_2(b^n)\rceil. For our 16-element array of base 3 this results in 26 bits, a savings of 6 bits over the 32-bit version. That might not seem like much, but combining such strategies with the others discussed here can greatly reduce bandwidth cost.
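
To see the numbers work out, here is a sketch that computes the bit cost and performs the summation for a small array, using a plain ulong instead of a big integer. This shortcut only works while b^n fits in 64 bits; the big-integer version later in the article removes that limit:

```csharp
using System;

class BaseEncodeDemo
{
    // Treat each array element as one base-b digit and sum them up.
    public static ulong Encode(uint[] digits, uint b)
    {
        ulong sum = 0, pow = 1;
        foreach (uint d in digits)
        {
            sum += pow * d;
            pow *= b;
        }
        return sum;
    }

    static void Main()
    {
        // 16 base-3 elements need ceil(log2(3^16)) = 26 bits, not 32.
        Console.WriteLine((int)Math.Ceiling(16 * Math.Log(3, 2))); // prints 26
        var digits = new uint[16];
        digits[0] = 2; digits[15] = 1; // encoded value = 2 + 1 * 3^15
        Console.WriteLine(Encode(digits, 3)); // prints 14348909
    }
}
```

Decoding is the mirror image: repeatedly take the value modulo 3 and divide by 3, yielding one digit per step.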

To demonstrate this we’ll go back to our radar example discussed in the previous section. Now we said we only had 10 entity types which means we’re working in base 10.

32 + 7 + 8 + \lceil log_2(10^{100})\rceil + (6 + 6) * 100~\text{entities} = 1580~\text{bits} = 198~\text{bytes}

Compared to the original 206 bytes this new encoding saves 8 bytes, compressing the entity type data to about 83% of its original size (333 bits instead of 400). The general equation for the savings in bits is given below:

\lceil log_2(\text{base})\rceil * \text{arraySize} - \lceil log_2(\text{base}^{\text{arraySize}})\rceil

This savings isn't necessarily free though. The previous example is engineered to show a serious implementation problem. You'll notice the number of bits required to store the encoding is \lceil \log_2(10^{100})\rceil, which is 333 bits. Summing numbers that large requires arbitrary-precision arithmetic; such larger-than-register integers are referred to as big integers. Many languages have access to big integer libraries, making this easy to implement, but when measured, the performance hit is significant. The portable implementation below took 0.095 ms to encode and decode 100 elements. Using C#'s BigInteger class took 0.07 ms, which is still dangerously long when one imagines writing and reading packets for 64 players. It should also be obvious that this method is pointless when the base is very close to a power of 2. Encoding the range 0 to 15, for instance, can use exactly 4 bits, and if you were to encode the range 0 to 14 you might as well use 4 bits per element, since the savings would only show up with a very large number of array elements. A solution to the performance problem is memoization: generate a look-up table keyed on the base and array length to speed up the encoding and decoding operations. For decoding, a big cost is just calculating the number of bits the array was encoded in, so remembering that number negates the cost.

Without using a big integer library the following sections cover one way to encode and decode the computed bit sequence.

4.6.1 Encoding

Encoding simply involves calculating the summation given previously:

value = \sum\limits_{i=0}^{n-1} a[i] * b^i

Expanding this into code results in:

BigInteger sum = 0;
BigInteger arrayBasePow = 1;
for (var i = 0; i < array.Length; ++i)
{
    sum += arrayBasePow * array[i];
    arrayBasePow *= arrayBase;
}
int bitCount = (int)Math.Ceiling(BigInteger.Log(arrayBasePow, 2.0));

This gives us both the final sum, which encodes all of the data into a single integer, and the number of bits, obtained by taking the logarithm base 2 of the largest possible summation. Without using a built-in big integer class we have to expand the two lines in the loop. First we choose a data structure to hold our big integers. The easiest choice is an array of 64-bit integers, using the low 32 bits of each element to store the big integer value; the high 32 bits hold any carry generated after multiplying or adding two numbers.

The first operation is arrayBasePow * array[i] between a big integer and a 32 bit unsigned integer. The big integer is replaced with an array:

var arrayBasePow = new List<ulong>();
arrayBasePow.Add(1);

To perform multiplication you can think of things in base 10. As a reminder, when multiplying 42 and 6 you first multiply the 6 and 2 to get 12. Now 12 can’t fit in base 10 so it overflows and you end up with 2 with a carry of 1. Then you perform 6 multiplied with 4 to get 24 and add in the carry to get 25. The 25 overflows resulting in 5 and you essentially end up doing 6 multiplied by 0 plus the 2 carry resulting in 2 with no carry. So the final result is 252.

Writing this in code turns into:

ulong multiplyCarry = 0;
for (var j = 0; j < arrayBasePow.Count || multiplyCarry != 0; ++j)
{
    // Grow the result as the product spills into new elements.
    if (j >= multiplyResult.Count)
    {
        multiplyResult.Add(0);
    }
    ulong product = 0;
    if (j < arrayBasePow.Count)
    {
        product = (ulong)array[i] * arrayBasePow[j];
    }
    multiplyResult[j] += product + multiplyCarry;
    multiplyCarry = multiplyResult[j] >> 32;
    multiplyResult[j] &= 0xFFFFFFFF;
}

The overflow part is handled in the last two lines by simply treating everything above the low 32 bits as the overflow, since the big integer we're building only uses the low 32 bits of each array element. Note that the above code won't handle instances where array[i] is larger than 32 bits; this is okay because the algorithm only needs to support 32-bit unsigned integers. The next part of the operation is accumulating that result into the total sum. Just like multiplication, we'll start with a reminder of what happens in base 10, adding 42 and 58. Columns are added right to left with carries added to the next column: 8 plus 2 is 10, so 0 with a carry of 1; then 5 plus 4 plus the carry equals 10, resulting in 0 with a carry of 1; the final column is 0 plus 0 plus the carry, which equals 1, for a final number of 100. Writing this in code turns into:

ulong addCarry = 0;
for (var j = 0; j < multiplyResult.Count || addCarry != 0; ++j)
{
    // Grow the sum as carries spill into new elements.
    if (j >= sum.Count)
    {
        sum.Add(0);
    }
    ulong addend = 0;
    if (j < multiplyResult.Count)
    {
        addend = multiplyResult[j];
    }
    sum[j] += addend + addCarry;
    addCarry = sum[j] >> 32;
    sum[j] &= 0xFFFFFFFF;
}

The final code listed at the end of this article combines both the multiplication and addition into a single loop to calculate the sum. The last operation, arrayBasePow *= arrayBase, is another big integer multiplied by 32 bit integer operation putting the result into arrayBasePow.

4.6.2 Decoding

Decoding involves taking the modulus of the sum by the base to get the array value then dividing the sum by the base. This process is repeated yielding a new array element each time which decodes the whole array.

for (var i = 0; i < array.Length; ++i)
{
	array[i] = (uint)(sum % arrayBase);
	sum /= arrayBase;
}

The first thing that must be done is loading the data from the byte array into the big integer sum. Most of the processing occurs just to calculate the number of bits the array was encoded into. As explained earlier the number of bits required is \lceil log_2(\text{base}^{\text{arraySize}})\rceil so the maximum possible encoded number must first be obtained before taking the logarithm base 2 of it. Performing such a big integer power operation is usually done via the exponentiation by squaring method.

An implementation of an exponentiation by squaring algorithm is listed below:

BigInteger Pow(BigInteger x, int n)
{
	BigInteger result = 1;
	while (n != 0)
	{
		if (n % 2 != 0)
		{
			result *= x;
			n--;
		}
		x *= x;
		n /= 2;
	}
	return result;
}

The two big integer operations happening are result *= x and x *= x. Unlike the previous big integer times 32 bit integer, these are big integer times big integer.

Like before we'll start with a reminder of what happens in base 10, multiplying 42 and 58. First the 8 is multiplied by the 2, resulting in 16. However, unlike before, where we could just put the 16 into the first element of the big integer array, overwriting the previous value, we need to preserve that value for later calculations, so we're forced to allocate a temporary array to hold our calculation and perform the carry calculation with. So we put 16 into the first element of the array and calculate the carry, leaving 6 in the element and holding onto the carry of 1. The next calculation is 8 times 4 plus the carry, resulting in 33; we put 33 into the second element of our temporary array, leaving 3 and holding onto the carry of 3. The last step for that row is essentially multiplying 8 times 0 plus the carry to get 3, storing the 3 in the third element of the temporary array. The second row shows a special step that must be performed: since its results are added to the current contents of the temporary array, the addition carry must be held onto. So 5 times 2 plus the current value of 3 in the second element is 13, leaving 3 in the second element with a carry of 1. Then 5 times 4 plus the carry plus the current value of 3 is 24, leaving 4 with a carry of 2. The final operation is 5 times 0 plus the carry, putting 2 in the fourth element for a final result of 2436.

Writing this in code turns into:

for (int i = 0; i < valueBaseTemp.Count; ++i)
{
	ulong carry = 0;
	for (var j = 0; j < powResult.Count || carry != 0; ++j)
	{
		if (i + j >= newPowResult.Count)
		{
			newPowResult.Add(0);
		}
		ulong product = 0;
		if (j < powResult.Count)
		{
			product = valueBaseTemp[i] * powResult[j];
		}
		newPowResult[j + i] += product + carry;
		carry = newPowResult[j + i] >> 32;
		newPowResult[j + i] &= 0xFFFFFFFF;
	}
}
powResult.Clear();
powResult.InsertRange(0, newPowResult);

The same operation is performed for x *= x.

After the maximum possible integer is found, its \log_2 is calculated, giving the number of bits. That number of bits is then read from the packet and placed into a big integer array. This brings us to the final operation: big integer division and modulus by a 32-bit integer.

In base 10, if you are dividing 43 by 3 you first divide the 4 to get 1 with a remainder of 1. Prepending that remainder to the 3 gives 13, and dividing again results in 4 with a remainder of 1, for a final quotient of 14 and a remainder of 1.

Writing this in code turns into:

ulong remainder = 0;
for (var j = sum.Count - 1; j >= 0; --j)
{
	sum[j] += remainder << 32;
	remainder = sum[j] % valueBase;
	sum[j] /= valueBase;
}

When performing division you automatically receive the remainder as a byproduct of performing the division calculation. This works out well since we require both the remainder and result to decode all the elements. This operation must be performed for each element in the array where each remainder represents one of the encoded elements in the order they were encoded.

5 Things to keep in mind

Often people serialize things they don't need to, maybe because it's easier than deciding whether the client already knows about them or easier than keeping track of things. As an example, if you're sending every player's health, mana, and stats every turn, you have much larger problems that bit packing just can't fix. Keep track of everything the client knows about; you should never be telling the client things they already know. If an entity walks into a player's range, do a full state update, then on the following updates simply do delta updates. This strategy will greatly reduce your bandwidth.

As a quick lesson, notice how the custom resolution float section builds information for every entity's position, even if the client might already know an entity's position on the radar and nothing is moving. If you knew a lot of players or entities were going to be idle, like in a social game, you could include an entity ID and only update entities on the radar that are moving. Another strategy is to only update the radar every second rather than every frame, and to stop updating it for a client that has been idle for over a minute. Strategies like these will greatly cut down on bandwidth. Remember to actually calculate estimates for everything; don't just guess that something is costly, or you might miss a simple optimization that makes it cost-effective.

The idea of compressing these packets comes up from time to time. Binary packets tend to have very high entropy, meaning they are close to random, which makes most of the data in them hard to compress. Their size also tends to be small, which further degrades any chance of compressing them effectively. In short, don't worry about compression; target larger problems.

6 Conclusion

Binary packets that work at the bit level offer many simple advantages that are nearly transparent to the programmer as they format packets. The methods listed above are some of the most useful to have. Some other tricks and ideas were left out. For instance, this article never mentions methods to seamlessly write mathematical vectors and other basic classes; some people like to add those because it saves time writing the same serialization code. Other things, such as support for arrays and associative arrays, were left out for a reason: writing an array is normally a unique thing, and boxed functions to do it might not always be optimal. As a note though, putting the length and then the sequence of elements is normally the preferred approach.

Reposted from: https://my.oschina.net/lyr/blog/56298
