Keil error: source file is not valid UTF-8

1. Keil reports the following compile error: ../User/APP/app.c(1): error: source file is not valid UTF-8

The build output also contains a string of garbled characters.

Solution:

Select the ARM Compiler version marked by the red box in the original screenshot (the ARM Compiler drop-down on the Target tab of Options for Target).
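For context, ARM Compiler 6 (armclang) requires source files to be valid UTF-8, so a file whose Chinese comments were saved in a legacy encoding such as GBK fails on its very first line. Below is a hypothetical app.c sketch of the situation and the two usual fixes (re-saving the file as UTF-8, or staying on ARM Compiler 5, which does not perform this check); the menu paths mentioned in the comments may vary slightly between µVision versions.

```c
/* Hypothetical app.c: the Chinese comment below is harmless when the file
 * is saved as UTF-8, but if the editor stored it as GBK/ANSI, ARM Compiler 6
 * reports:
 *   error: source file is not valid UTF-8
 *
 * 初始化应用  -- non-ASCII text that must be stored as UTF-8 for armclang
 */
#include <stdint.h>

void App_Init(void)
{
    /* Fix 1: re-save this file with UTF-8 encoding (in uVision the editor
     *        encoding can be set under Edit -> Configuration -> Editor ->
     *        Encoding; the file must then be saved again so the bytes on
     *        disk actually change).
     * Fix 2: switch the project back to ARM Compiler 5 in Options for
     *        Target -> Target, which does not enforce the UTF-8 check.     */
}
```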

2. Error: L6406E: No space in execution regions with .ANY selector matching os_cfg_app.o(.bss).

(1) The compiled project is far larger than the chip's on-chip flash, so the program no longer fits. In that case the code needs a higher optimization level. Change the Keil 5 settings as follows:

After this option is adjusted in Keil 5, the compiler optimizes the code. Without optimization, the entire file is compiled, linked, and placed into the chip's flash; with higher optimization, only the parts of the file that are actually used are kept, and the unused parts are dropped. With that unused data removed before linking, the final image is considerably smaller.

The optimization levels Level 0 through Level 3 become progressively more aggressive, but as the level rises the program becomes harder to debug, so choose with care.
Whenever you run into similarly oversized projects in the future, you can also try adjusting this compiler option.
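As a rough sketch of why this helps: much of the size reduction comes from the toolchain dropping code that nothing references (in Keil this works best together with the "One ELF Section per Function" option on the C/C++ tab, which lets the linker discard unused functions section by section). The file below is a hypothetical example; the function names are made up for illustration.

```c
#include <stdint.h>

/* Referenced from main(), so its section is kept in flash. */
static uint32_t checksum(const uint8_t *buf, uint32_t len)
{
    uint32_t sum = 0u;
    while (len--) {
        sum += *buf++;
    }
    return sum;
}

/* Never referenced anywhere: with unused-section elimination the linker
 * drops it, so it costs no flash in the final image. */
uint32_t unused_table_builder(void)
{
    uint32_t acc = 0u;
    for (uint32_t i = 0u; i < 256u; i++) {
        acc ^= i * 0x04C11DB7u;
    }
    return acc;
}

int main(void)
{
    static const uint8_t data[4] = {1u, 2u, 3u, 4u};
    volatile uint32_t result = checksum(data, 4u);
    (void)result;
    for (;;) {
        /* application main loop */
    }
}
```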

(2)

Analysis: this error occurs because the chip does not have enough RAM to hold the program's data. RAM is usually small, while ROM is comparatively large.
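To confirm which memory is actually overflowing, look at the size summary µVision prints after a successful build, or at the "Image component sizes" section of the project's .map file. The numbers below are hypothetical:

```
Program Size: Code=18320 RO-data=1216 RW-data=408 ZI-data=9832
```

Flash usage is roughly Code + RO-data + RW-data (the RW initial values are stored in flash and copied to RAM at startup), while RAM usage is roughly RW-data + ZI-data. L6406E against a .bss section points at the zero-initialized (ZI) side, i.e. RAM.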

Solution: try increasing the RAM Size on the right-hand side of the Target tab in "Options for Target". Before doing so, check the chip's datasheet for the documented RAM size and set Size to that maximum value.
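Since the error here points at os_cfg_app.o(.bss), the RAM is most likely being consumed by zero-initialized arrays such as the task stacks that uC/OS-III defines in os_cfg_app.c; if the chip's RAM is already at its documented maximum, shrinking the stack sizes configured in os_cfg_app.h is another option. The snippet below is a generic, hypothetical illustration of how such arrays end up in .bss and count against RAM:

```c
#include <stdint.h>

/* Hypothetical sizes, for illustration only */
#define APP_TASK_STK_SIZE   1024u          /* stack depth in 32-bit words */
#define RX_BUFFER_SIZE      (8u * 1024u)   /* 8 KB receive buffer         */

/* Zero-initialized (or uninitialized) globals are placed in .bss:
 * they cost no flash, but they occupy RAM for the whole run time.   */
static uint32_t AppTaskStk[APP_TASK_STK_SIZE];   /* 4 KB of RAM */
static uint8_t  RxBuffer[RX_BUFFER_SIZE];        /* 8 KB of RAM */

int main(void)
{
    (void)AppTaskStk;   /* referenced so the arrays are not optimized away */
    (void)RxBuffer;
    for (;;) {
        /* application main loop */
    }
}
```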

 

