Breadth-first search (BFS) is one of the most basic search strategies: starting from the initial node, it expands outward level by level until the goal node is reached. However, it only handles problems with a fairly small state space well; once the amount of search grows huge, it often runs out of memory. Bidirectional BFS is an improvement on plain BFS that reduces both the space and the time complexity.
For some problems, BFS node expansion works equally well forward (from the initial node toward the goal) and in reverse (from the goal toward the initial node). In that case the one-way search can be turned into a two-way search: expand from the initial node toward the goal and from the goal toward the initial node at the same time, and stop as soon as the same node appears on both expansion frontiers.
For example: given a string containing only the letters A and B, the initial state is shown in (a) and the goal state in (b). The only allowed move is to swap two adjacent letters. Find a way to reach the goal in the fewest moves.
(a) [AABBAA]    (b) [BAAAAB]
Bidirectional expansion of nodes (Figure 1):

Forward (from AABBAA):
- level 1: node 1 = AABBAA
- level 2: node 2 = ABABAA, node 3 = AABABA
- level 3: node 4 = ABBAAA and node 5 = BAABAA (from node 2); node 6 = ABAABA, node 7 = AAABBA and node 8 = AABAAB (from node 3)

Backward (from BAAAAB):
- level 1: node 1 = BAAAAB
- level 2: node 2 = ABAAAB, node 3 = BAAABA
- level 3: node 4 = AABAAB (from node 2)
Node 8 of the forward expansion and node 4 of the backward expansion are the same string, AABAAB, so that is where the two searches meet. The optimal path for the problem is shown in Figure 2.
[AABBAA]—[AABABA]—[AABAAB]—[ABAAAB]—[BAAAAB]
Figure 2
As for which direction to expand next: since most solution trees are not complete trees, after one level has been fully expanded, the next level is expanded on whichever side currently has fewer frontier nodes.
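To make the idea concrete, here is a minimal sketch of a bidirectional BFS for the adjacent-swap puzzle above, including the expand-the-smaller-frontier rule. This snippet is my own illustration rather than code from the original problem; the function name minAdjacentSwaps and the overall structure are assumptions.

#include <iostream>
#include <string>
#include <unordered_set>
#include <utility>
using namespace std;

// Illustrative bidirectional BFS: returns the minimum number of adjacent swaps
// needed to turn `start` into `goal`, or -1 if the goal is unreachable.
int minAdjacentSwaps(const string& start, const string& goal) {
    if (start == goal) return 0;
    unordered_set<string> front{start}, back{goal}, visited{start, goal};
    int steps = 0;
    while (!front.empty() && !back.empty()) {
        // Expand the smaller frontier first, as described above.
        if (front.size() > back.size()) swap(front, back);
        unordered_set<string> next;
        ++steps;
        for (const string& word : front) {
            for (size_t i = 0; i + 1 < word.size(); ++i) {
                string neighbor = word;
                swap(neighbor[i], neighbor[i + 1]);      // one adjacent swap
                if (back.count(neighbor)) return steps;  // the two searches meet
                if (visited.insert(neighbor).second)     // not seen by either side yet
                    next.insert(neighbor);
            }
        }
        front = move(next);
    }
    return -1;
}

int main() {
    // The example from Figures 1 and 2: four swaps are needed.
    cout << minAdjacentSwaps("AABBAA", "BAAAAB") << endl;  // prints 4
}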
Let's now take a classic LeetCode problem as an example:
Given two words (beginWord and endWord), and a dictionary's word list, find all shortest transformation sequence(s) from beginWord to endWord, such that:
- Only one letter can be changed at a time
- Each transformed word must exist in the word list. Note that beginWord is not a transformed word.
For example,
Given:
beginWord = "hit"
endWord = "cog"
wordList = ["hot","dot","dog","lot","log","cog"]
Return
[
["hit","hot","dot","dog","cog"],
["hit","hot","lot","log","cog"]
]
Note:
- Return an empty list if there is no such transformation sequence.
- All words have the same length.
- All words contain only lowercase alphabetic characters.
- You may assume no duplicates in the word list.
- You may assume beginWord and endWord are non-empty and are not the same.
UPDATE (2017/1/20):
The wordList parameter had been changed to a list of strings (instead of a set of strings).
The approach to this problem: run a BFS until the level containing endWord is reached; that depth is the length of the shortest transformation sequence. During the BFS, a map records, for each node, the next nodes it can reach on a shortest path, which is later used to reconstruct all the ladders. If a node has already appeared at an earlier level, any later occurrence of it can only lead to a longer path to endWord, so every node is removed from the word list as soon as it first appears. How can this be optimized? Replace the one-way BFS with a bidirectional BFS; in the end this turned out to be about 3 times faster. The bidirectional BFS solution is as follows:
#include <iostream>
#include <string>
#include <vector>
#include <unordered_set>
#include <unordered_map>
using namespace std;

class Solution {
public:
    vector<string> tmp_path;
    vector<vector<string>> result_path;

    vector<vector<string>> findLadders(string beginWord, string endWord, vector<string>& wordList) {
        unordered_set<string> front, back, next;
        // path[w] holds the successors of w on some shortest ladder;
        // edges always point from the beginWord side toward the endWord side.
        unordered_map<string, unordered_set<string>> path;
        unordered_set<string> dict(wordList.begin(), wordList.end());

        front.insert(beginWord);
        if (dict.count(endWord))
            back.insert(endWord);

        bool done = false;
        while (done == false && dict.size() > 0)
        {
            // Expand the smaller frontier first (the heuristic discussed above).
            if (front.size() < back.size())
            {
                // Remove the current frontier from dict so its words are never revisited.
                for (auto it = front.begin(); it != front.end(); it++)
                    dict.erase(*it);
                for (auto it = front.begin(); it != front.end(); it++)
                {
                    string word = *it;
                    for (int i = 0; i < word.size(); i++)
                        for (char change = 'a'; change <= 'z'; change++)
                        {
                            string tmp = word;
                            tmp[i] = change;
                            if (back.count(tmp))
                            {
                                // The two searches meet: record the edge and stop after this level.
                                done = true;
                                path[word].insert(tmp);
                            }
                            else if (done == false && dict.count(tmp))
                            {
                                next.insert(tmp);
                                path[word].insert(tmp);
                            }
                        }
                }
                front = next;
            }
            else
            {
                for (auto it = back.begin(); it != back.end(); it++)
                    dict.erase(*it);
                for (auto it = back.begin(); it != back.end(); it++)
                {
                    string word = *it;
                    for (int i = 0; i < word.size(); i++)
                        for (char change = 'a'; change <= 'z'; change++)
                        {
                            string tmp = word;
                            tmp[i] = change;
                            if (front.count(tmp))
                            {
                                done = true;
                                // Reverse the edge so path still points toward endWord.
                                path[tmp].insert(word);
                            }
                            else if (done == false && dict.count(tmp))
                            {
                                next.insert(tmp);
                                path[tmp].insert(word);
                            }
                        }
                }
                back = next;
            }
            if (next.empty())
                break;
            next.clear();
        }
        if (done == true)
            generatePath(path, beginWord, endWord);
        return result_path;
    }

    // DFS over the path map to enumerate every shortest ladder.
    void generatePath(unordered_map<string, unordered_set<string>>& path, string start, string end)
    {
        tmp_path.push_back(start);
        if (start == end)
        {
            result_path.push_back(tmp_path);
            return;
        }
        for (auto it = path[start].begin(); it != path[start].end(); it++)
        {
            generatePath(path, *it, end);
            tmp_path.pop_back();
        }
    }
};
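For reference, here is a small driver showing how the class above can be called on the example from the problem statement. It is my own addition rather than part of the LeetCode submission, and it relies on the headers included above.

// Hypothetical driver exercising beginWord = "hit", endWord = "cog".
int main() {
    Solution sol;
    vector<string> wordList = {"hot", "dot", "dog", "lot", "log", "cog"};
    vector<vector<string>> ladders = sol.findLadders("hit", "cog", wordList);
    for (const auto& ladder : ladders) {
        for (const auto& word : ladder)
            cout << word << " ";
        cout << endl;
    }
    // Expected output (order of the two ladders may vary):
    // hit hot dot dog cog
    // hit hot lot log cog
}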