Both `boost::split` and `boost::tokenizer` can split a string.
- `split` needs extra storage: it writes the pieces into a caller-provided `std::vector`:

```cpp
#include <boost/algorithm/string.hpp>
#include <boost/lexical_cast.hpp>

std::string sChannel("0;1;2");
std::vector<std::string> vsChannel;
std::vector<int64_t> vlChannel;
boost::split(vsChannel, sChannel, boost::is_any_of(";"), boost::token_compress_on);
for (auto it = vsChannel.begin(); it != vsChannel.end(); ++it)
{
    int64_t iChannel = boost::lexical_cast<int64_t>(*it);
    vlChannel.push_back(iChannel);
}
```
- `tokenizer` needs no extra storage; the tokens are read on demand through an iterator:

```cpp
#include <boost/tokenizer.hpp>
#include <boost/lexical_cast.hpp>

typedef boost::tokenizer<boost::char_separator<char>> tokenizer;
boost::char_separator<char> sep(";");
tokenizer mytokens(sChannel, sep);
for (tokenizer::iterator tok_iter = mytokens.begin(); tok_iter != mytokens.end(); ++tok_iter)
{
    int64_t iChannel = boost::lexical_cast<int64_t>(*tok_iter);
    vlChannel.push_back(iChannel);
}
```
- When the input string is "0;1;2;", `split` yields a trailing empty token (even with `boost::token_compress_on`, which only merges adjacent separators), while `tokenizer` skips it entirely. Note that `boost::lexical_cast<int64_t>` throws `boost::bad_lexical_cast` on an empty token, so the `split` loop above needs to guard against that case.