I have been interested in solving many-objective optimization problems with an inverse model, so that I can manipulate operations in the objective space rather than in the decision space. I tried to use a neural network to build an inverse model on benchmark problems such as DTLZ and WFG, but it failed. The main reason is that many identical points in the objective space correspond to different values in the decision space. Because of this phenomenon, the mapping from the objective space to the decision space is not a function, so a neural network cannot fit it well.
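To illustrate the failure mode, here is a tiny sketch (my own toy example, not DTLZ or WFG): for f(x) = x^2, two decision values map to every positive objective value, so any regression trained on the reversed pairs (f, x) can only average the conflicting targets.

```python
import numpy as np

# Toy one-variable problem: f(x) = x**2.
x = np.linspace(-2.0, 2.0, 401)
f = x ** 2

# Both x = 1 and x = -1 hit the same objective value f = 1,
# so the inverse mapping f -> x is not a function.
assert (1.0) ** 2 == (-1.0) ** 2

# A least-squares model trained on (f, x) pairs can only return one
# value per objective point; it averages the conflicting targets.
coeffs = np.polyfit(f, x, deg=3)   # regress decision on objective
x_pred = np.polyval(coeffs, 1.0)   # "inverse" of f = 1
print(x_pred)                      # near 0, not +1 or -1
```

The fitted model predicts roughly 0 for f = 1, which is not a valid preimage at all; the same thing happens, in higher dimensions, when a neural network is trained on the objective-to-decision mapping of DTLZ or WFG.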
At that time, I knew of a paper that solves multi-objective problems with inverse models built by Gaussian processes. I also tried to learn GP, but again I failed: I found the material very difficult. Then last week, Handing posted a post about GP that is very easy to understand. With her help, I have learned a little about it. So I read the paper once more and also went through its source code, and now I find it easy to understand.
In the sampling phase, the author simply modifies one dimension of the objective values and then uses the GP to predict the corresponding dimension of the decision variables. The updated decision variables serve as the offspring. Interesting...
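Below is a minimal sketch of how I understand that sampling step, reduced to one objective dimension and one decision dimension. The `gp_predict` helper, the RBF kernel length scale, and the perturbation scale are my own choices for illustration, not taken from the paper.

```python
import numpy as np

def gp_predict(f_train, x_train, f_query, length=0.5, noise=1e-6):
    """Posterior mean of a 1-D GP regression (RBF kernel) mapping
    one objective dimension to one decision dimension."""
    def rbf(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = rbf(f_train, f_train) + noise * np.eye(len(f_train))
    k_star = rbf(f_query, f_train)
    return k_star @ np.linalg.solve(K, x_train)

# Hypothetical parent population: decision variable x and objective f.
rng = np.random.default_rng(0)
x_parent = rng.uniform(0.0, 1.0, 20)
f_parent = x_parent ** 2   # toy objective, monotone here so the
                           # inverse mapping f -> x is well defined

# The sampling step as I read it:
# 1) perturb one objective dimension of each parent,
# 2) map the perturbed objectives back through the GP inverse model,
#    and take the predicted decision values as the offspring.
f_perturbed = f_parent + rng.normal(0.0, 0.02, f_parent.shape)
x_offspring = gp_predict(f_parent, x_parent, f_perturbed)
print(x_offspring[:5])
```

The point of the construction is that variation happens entirely in the objective space; the GP inverse model then translates that variation back into candidate decision vectors.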