How can I query for text containing Asian-language characters in MySQL?

I have a MySQL table using the UTF-8 character set with a single column called WORDS of type longtext. Values in this column are typed in by users and are a few thousand characters long.

There are two types of rows in this table:

In some rows, the WORDS value has been composed by English speakers and contains only characters used in ordinary English writing. (Not all are necessarily ASCII, e.g. the euro symbol may appear in some cases.)

Other rows have WORDS values written by speakers of Asian languages (Korean, Chinese, Japanese, and possibly others), which include a mix of English words and words in the Asian languages using their native logographic characters (and not, for example, Japanese romaji).

How can I write a query that will return all the rows of type 2 and no rows of type 1? Alternatively, if that's hard, is there a way to query most such rows (here it's OK if I miss a few rows of type 2, or include a few false positives of type 1)?

Update: Comments below suggest I might do better to avoid the MySQL query engine altogether, as its regex support for unicode doesn't sound too good. If that's true, I could extract the data into a file (using mysql -B -e "some SQL here" > extract.txt) and then use perl or similar on the file. An answer using this method would be OK (but not as good as a native MySQL one!)
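If you do take the extraction route, the filtering step is straightforward in any scripting language. Here is a minimal Python sketch in place of perl (the file name `extract.txt` and the character range `U+3000..U+9FFF` used later in the answer are assumptions; `mysql -B` emits one tab-separated row per line):

```python
# Filter rows previously extracted with:
#   mysql -B -e "SELECT id, words FROM mydata" > extract.txt
# Assumption: extract.txt is UTF-8, one tab-separated row per line.

def has_cjk(text):
    """True if text contains any character in U+3000..U+9FFF
    (CJK punctuation, kana, and the CJK Unified Ideographs)."""
    return any('\u3000' <= ch <= '\u9fff' for ch in text)

def filter_cjk_lines(path):
    """Return the lines of the extract file that contain CJK characters."""
    with open(path, encoding='utf-8') as f:
        return [line.rstrip('\n') for line in f if has_cjk(line)]
```

This checks code points directly rather than UTF-8 bytes, so it sidesteps the encoding question entirely; it will have the same blind spots as the byte-range approach below (e.g. characters outside U+3000..U+9FFF).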

Solution

In theory you could do this:

Find the unicode ranges that you want to test for.

Manually encode the start and end into UTF-8.

Use the first byte of each of the encoded start and end as a range for a REGEXP.

I believe that the CJK range is far enough removed from things like the euro symbol that the false positives and false negatives would be few or none.

Edit: We've now put theory into practice!

Step 1: Choose the character range. I suggest \u3000-\u9fff; easy to test for, and should give us near-perfect results. (One caveat: Korean Hangul syllables live at U+AC00-U+D7A3, outside this range, so a row written purely in Hangul would only match via other characters such as CJK punctuation.)

Step 2: Encode into bytes. (Wikipedia utf-8 page)

For our chosen range, utf-8 encoded values will always be 3 bytes, the first of which is 1110xxxx, where xxxx is the most significant four bits of the unicode value.

Thus, we want to match bytes in the range 11100011 to 11101001, i.e. 0xe3 to 0xe9.
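As a quick sanity check of step 2 (a Python sketch, not part of the original answer), encoding the range endpoints confirms the leading bytes:

```python
# Verify the leading UTF-8 bytes for the chosen range U+3000..U+9FFF.
lo = '\u3000'.encode('utf-8')   # b'\xe3\x80\x80' -> lead byte 0xe3
hi = '\u9fff'.encode('utf-8')   # b'\xe9\xbf\xbf' -> lead byte 0xe9

# The euro sign (U+20AC) sits just below the range: its lead byte is 0xe2,
# which is why it will not produce false positives.
euro = '\u20ac'.encode('utf-8')  # b'\xe2\x82\xac'

print(hex(lo[0]), hex(hi[0]), hex(euro[0]))
```

Continuation bytes in UTF-8 are always 10xxxxxx (0x80-0xbf), so they can never fall inside 0xe3-0xe9; only lead bytes of 3-byte sequences can match.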

Step 3: Make our regexp using the very handy (and just now discovered by me) UNHEX function.

SELECT * FROM `mydata`
WHERE `words` REGEXP CONCAT('[', UNHEX('e3'), '-', UNHEX('e9'), ']');

Just tried it out. Works like a charm. :)
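Outside MySQL, the same byte-range test is easy to reproduce for spot-checking sample values (a Python sketch; the sample strings are made up):

```python
import re

# The REGEXP above matches any byte in 0xe3..0xe9, i.e. the lead byte of a
# 3-byte UTF-8 sequence encoding U+3000..U+9FFF.
cjk_byte = re.compile(b'[\xe3-\xe9]')

def looks_cjk(text):
    """Apply the same byte-range test the MySQL REGEXP performs."""
    return bool(cjk_byte.search(text.encode('utf-8')))

# Type-1 row: only ordinary English characters plus the euro sign.
print(looks_cjk('price is €5'))         # euro encodes as 0xe2 0x82 0xac -> no match
# Type-2 row: mixed English and CJK ideographs.
print(looks_cjk('MySQL 数据库 query'))   # ideographs have lead bytes 0xe5/0xe6 -> match
```

Note this relies on REGEXP operating byte-wise, which is how older MySQL versions behave; MySQL 8.0 switched REGEXP to the ICU library with proper Unicode semantics, where an explicit character-range pattern would be the cleaner choice.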
