
Question

The Fluent GPU solver starts to slow down as the number of CPU cores used for preprocessing increases.

By default, a one-to-one CPU-to-GPU mapping is used: each CPU partition is mapped to its own GPU solver instance, so a single GPU hosts multiple instances when there are more CPU partitions than available GPUs. When many CPU cores are used for preprocessing, this oversubscription can overload GPU memory and slow the solver down, so the mapping needs to be adjusted.

 
[Figure: diagram of the default one-to-one CPU-to-GPU mapping]
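The slowdown described in the question follows directly from this default policy: the number of GPU solver instances grows with the number of CPU partitions, not with the number of GPUs. The short Python sketch below is a minimal illustration of that behaviour; it assumes a simple round-robin assignment of partitions to devices, which is an assumption made for illustration rather than the exact scheme Fluent uses.

# Minimal sketch of the default one-to-one CPU-partition-to-GPU-instance mapping.
# Assumption: partitions are spread round-robin across the available GPUs; the
# real assignment inside Fluent may differ, but the instance count per GPU is
# the same either way.
from collections import Counter

def default_mapping(n_cpu_partitions: int, n_gpus: int) -> dict[int, int]:
    """Return {gpu_id: number of solver instances} under the default mapping."""
    gpu_of_partition = [p % n_gpus for p in range(n_cpu_partitions)]
    return dict(Counter(gpu_of_partition))

# 32 CPU partitions on 2 GPUs -> 16 instances (and 16 memory footprints) per GPU,
# which is where the memory overload and slowdown come from.
print(default_mapping(n_cpu_partitions=32, n_gpus=2))   # {0: 16, 1: 16}
print(default_mapping(n_cpu_partitions=4, n_gpus=2))    # {0: 2, 1: 2}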
 
 
Answer 

You can overcome this by enabling the GPU remap option in the Parallel Settings tab of the launcher when starting Fluent. This option is available only when the Native GPU Solver is selected.

[Screenshot: Fluent Launcher Parallel Settings tab with the GPU remap option]
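If you start Fluent from a script rather than from the launcher GUI, the sketch below shows roughly how a GPU-solver session can be requested through PyFluent (ansys-fluent-core). Treat the parameter names, in particular gpu and dimension, as assumptions that depend on the PyFluent release you have installed; the GPU remap checkbox itself is described here only as a launcher setting, so no programmatic equivalent is shown.

# Hedged PyFluent sketch: launching the Fluent native GPU solver from Python.
# Assumption: launch_fluent() in your ansys-fluent-core release accepts the
# dimension, precision, processor_count and gpu keywords; check its signature
# for the version you use. The GPU remap option discussed above is set in the
# launcher's Parallel Settings tab and is not shown here.
import ansys.fluent.core as pyfluent

solver = pyfluent.launch_fluent(
    dimension=3,              # 3D session
    precision="double",       # double precision
    processor_count=8,        # CPU processes used for pre/post-processing
    gpu=True,                 # request the native GPU solver (assumed keyword)
)

solver.exit()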
 

This option remaps the CPU partitions to a single GPU or to multiple GPUs, depending on the situation, so that GPU memory is not overloaded.

[Figure: diagram of the remapped CPU-to-GPU configuration]
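As a rough picture of what the remap does, the sketch below extends the earlier mapping example: instead of one solver instance per CPU partition, the partitions are consolidated onto the available GPUs. The grouping shown (all partitions merged into a single instance per GPU) is an assumption used for illustration; the exact grouping Fluent chooses depends on the situation, as noted above.

# Hedged sketch of the GPU remap behaviour: CPU partitions are grouped so that
# each GPU ends up with a small number of solver instances instead of one
# instance per partition. The "one instance per GPU" grouping below is an
# illustrative assumption, not Fluent's exact algorithm.
def remapped_mapping(n_cpu_partitions: int, n_gpus: int) -> dict[int, list[int]]:
    """Return {gpu_id: list of CPU partitions whose data it holds}."""
    groups: dict[int, list[int]] = {g: [] for g in range(n_gpus)}
    for p in range(n_cpu_partitions):
        groups[p % n_gpus].append(p)
    return groups

# 32 CPU partitions, 2 GPUs: one consolidated instance per GPU instead of 16,
# so the per-GPU memory footprint stays bounded as the CPU core count grows.
mapping = remapped_mapping(n_cpu_partitions=32, n_gpus=2)
for gpu_id, partitions in mapping.items():
    print(f"GPU {gpu_id}: 1 instance holding {len(partitions)} partitions")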

For more details, see the Ansys documentation: https://ansyshelp.ansys.com/public/account/secured?returnurl=//Views/Secured/corp/v252/en/flu_ug/flu_ug_sec_gpu_solver_starting.html