We are currently trying to optimize a system with at least 12 variables. The total number of combinations of these variables is over 1 billion. This is not deep learning, machine learning, TensorFlow, or anything similar; it is an arbitrary calculation on time series data.
We have implemented our code in Python and successfully run it on a CPU. We also tried multiprocessing, which works well, but we need faster computation since the calculation takes weeks. We have a GPU system consisting of 6 AMD GPUs. We would like to run our code on this GPU system but do not know how.
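For context, our current approach looks roughly like the sketch below: a brute-force sweep over every parameter combination, scoring each one against the series. The function `score` and the two-variable grid are simplified stand-ins for our real (12-variable, much more expensive) calculation:

```python
import itertools
import numpy as np

# Hypothetical stand-in for the real calculation on the time series;
# the actual function is arbitrary and far more expensive.
def score(params, series):
    a, b = params
    return float(np.sum(a * series + b))

def best_params(series, grid):
    # Brute-force sweep over every combination; in our real code this
    # loop is spread over worker processes with multiprocessing.Pool.
    scores = [score(p, series) for p in grid]
    return grid[int(np.argmax(scores))]

series = np.arange(100, dtype=np.float64)
grid = list(itertools.product(range(5), range(5)))  # 12 variables in reality
print(best_params(series, grid))  # → (4, 4)
```

With over a billion combinations, even a cheap `score` makes this loop the bottleneck, which is why we want to offload it to the GPUs.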
My questions are:
Can we run our simple Python code on my laptop, which has an AMD GPU?
Can we run the same app on our GPU system?
We read that we need to adjust the code for GPU computation, but we do not know how to do that.
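To show the kind of adjustment we imagine, here is a minimal sketch assuming CuPy (which ships ROCm builds for AMD GPUs) as a drop-in NumPy replacement; the function and parameter names are illustrative, not our real code, and it falls back to plain NumPy on machines without a GPU:

```python
# Assumption: CuPy with a ROCm build is installed on the AMD GPU system.
try:
    import cupy as xp  # near drop-in NumPy replacement that runs on the GPU
except ImportError:
    import numpy as xp  # CPU fallback so the sketch runs anywhere

def evaluate_all(series, a_values, b_values):
    # Score every (a, b) combination at once: broadcasting replaces the
    # nested Python loops with a single array expression on the device.
    a = xp.asarray(a_values, dtype=xp.float64)[:, None]  # shape (A, 1)
    b = xp.asarray(b_values, dtype=xp.float64)[None, :]  # shape (1, B)
    total = xp.sum(xp.asarray(series))                   # scalar
    return a * total + b                                 # shape (A, B)

scores = evaluate_all(xp.arange(100.0), [0, 1, 2], [0, 10])
best = xp.unravel_index(xp.argmax(scores), scores.shape)
print(best)  # indices of the best (a, b) pair
```

Is this broadly the right direction, i.e. rewriting the inner loops as array operations, or does running on 6 AMD GPUs require something different (HIP kernels, a different library, etc.)?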
PS: I can add more information if you need it. I tried to keep the post as short as possible.