Increased RAM, but still says insufficient memory

abcdandy posted this 03 May 2019

Hello 

I have been having memory issues for quite some time due to my comprehensive model. It is quite large at 2.5 million nodes, but the detail and contact pairs necessitate a fine mesh.

To mitigate the issue, I have already tried the following (a command-level sketch of these settings is shown after the list):

- running the iterative (PCG) solver

- running out-of-core

- increasing virtual memory

- using MSAVE
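
For reference, these options correspond roughly to the following Mechanical APDL commands (a sketch only; in Workbench they are typically applied through Analysis Settings or a Command object, and the exact options I used may have differed):

 EQSLV,PCG              ! select the iterative (PCG) solver
 MSAVE,ON               ! PCG memory-saving option (avoids assembling the full stiffness matrix)
 BCSOPTION,,OUTOFCORE   ! run the sparse solver in out-of-core memory mode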

However, even after all of this I still get the error message:

"There is not enough memory for the Sparse Matrix Solver used by the PCG 

 solver to proceed using the out-of-core memory mode.  The total memory  

 required by all processes = 186543 MB.  The total physical memory that  

 is available on the system = 115895 MB.  Please decrease the model      

 size, or run this model on another system with more physical memory."

 

Currently I have 125 GB of physical memory and have just added 80 GB of swap. However, the error message only counts the physical memory, even though the combined total should cover the 186543 MB requirement.

I already allocated all of the RAM in the Solve Process Settings, yet the issue persists.

Please help and thank you,

 

Andy

 

 

peteroznewman posted this 07 May 2019

I used what you had, which was Initial Substeps 1000.

After it started iterating properly, I stopped the solution.
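
For reference, the equivalent step controls in a standalone Mechanical APDL run would look roughly like the sketch below (the values are illustrative only; in Workbench these are set under Analysis Settings > Step Controls):

 /SOLU
 ANTYPE,STATIC          ! static structural analysis
 NLGEOM,ON              ! include large-deflection effects
 AUTOTS,ON              ! automatic time stepping
 NSUBST,1000,100000,100 ! initial / maximum / minimum substeps (Initial Substeps = 1000)
 SOLVE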

abcdandy posted this 07 May 2019

I did not know this. I will definitely try bigger meshes. And yes, I meant element distortion.

May I ask how many substeps you used?

Also when you say 'it made it to the point of iterating,' did the solver crash then, or did you terminate it?

Thank you always!

peteroznewman posted this 06 May 2019

When you say material distortion, do you mean element distortion? Element distortion can happen with both small and large elements. Making smaller elements does not automatically resolve element distortion problems. Applying the load in smaller increments is typically the best way to avoid element distortion. But once the load has slowly ramped up to a significant value, the elements can become too distorted to continue no matter how small the load increments are. That is when remeshing may be required, but there is a better approach...

I have had models where the elements were created with a high aspect ratio, in anticipation of being distorted to a lower aspect ratio as the solution progressed. If the elements had started out with an aspect ratio of 1 (perfect cube), the load would not have gotten very large before the solver stopped with a distorted element error. With tall elements at the beginning, the load was able to be increased to its full value without a distorted element error. Using larger elements actually made it easier to mesh these tall elements. Smaller elements would not have been as successful.  The same can be said for pre-skewed elements that experience shear. They become less skewed as the shear load increases.

I ran the full model in 18.2 with all the small element sizes tripled, using the Iterative solver, and it made it to the point of iterating. The computer was using 113 GB of RAM. The Direct solver is preferred if the model will solve in-core.

Time at end of element matrix formulation CP = 7754.63818.              

 Memory resident data base increased from      8192 MB to     16384 MB.

 ALL CURRENT ANSYS DATA WRITTEN TO FILE NAME= file.rdb
  FOR POSSIBLE RESUME FROM THIS POINT
     FORCE CONVERGENCE VALUE  =   119.0      CRITERION=  0.6074    
     MOMENT CONVERGENCE VALUE =   4.150      CRITERION=  0.2117E-01
 curEqn=  16340  totEqn=  16340 Job CP sec=   7786.977
      Factor Done= 100% Factor Wall sec=      0.092 rate=     260.2 Mflops
 Iteration=    10 Ratio=  0.172054     Limit=  1.000000E-08 Wall=     6.9
 Iteration=   105 Ratio=  3.345729E-03 Limit=  1.000000E-08 Wall=   111.0
 Iteration=   160 Ratio=  2.004384E-03 Limit=  1.000000E-08 Wall=   173.0

abcdandy posted this 06 May 2019

Hi Peter

Yes, the contact pairs necessitate those fine meshes. The model experiences material distortion past 0.3 mm.

What would you recommend as a next step?

Thank you very much,

Andy

peteroznewman posted this 06 May 2019

With all the bodies unsuppressed, increasing all the small element sizes (around 0.25 mm) up to 0.8 mm creates a model that will run in-core in 14.5 GB of RAM with the Direct (Sparse) solver using 15 cores on 2019R1. It doesn't converge on a solution, because some elements become highly distorted, but that is another problem.

I then doubled all the small element sizes instead of setting them to 0.8 mm. The Direct solver will not begin iterations in the 2019R1 release and there is no error in solve.out as to why it didn't start iterating. Something goes wrong as the element sizes get smaller, but I don't know what the problem is.

peteroznewman posted this 05 May 2019

I tried suppressing most of the internal bodies to make sure that the solver would eventually start. Keeping just the 4 cylinders at the bottom, I started that on ANSYS 2019R1 and allowed it 8 cores.

The solver is definitely working on this problem, though it will not end up converging. I can see 8 cores being used fully. The computer is using 64 GB of RAM. Here are the last few lines of the Solution Output (solve.out) file.

 Iteration=  2750 Ratio=  1.330395E+33 Limit=  1.000000E-08 Wall=  3500.4
 Iteration=  2810 Ratio=  6.750189E+32 Limit=  1.000000E-08 Wall=  3576.9
 Iteration=  2870 Ratio=  3.321855E+33 Limit=  1.000000E-08 Wall=  3653.3
 Iteration=  2930 Ratio=  4.380177E+34 Limit=  1.000000E-08 Wall=  3729.8
 Iteration=  2990 Ratio=  3.309826E+35 Limit=  1.000000E-08 Wall=  3806.2
 Iteration=  3050 Ratio=  1.183019E+35 Limit=  1.000000E-08 Wall=  3882.6

I'm going to stop that solution now. You can try to suppress half the cylinders and see if that allows the solver to start.

I recommend you reduce the mesh density on the small pads until the model starts running.

abcdandy posted this 05 May 2019

Interesting. My first post had a screenshot showing that the total memory required by all processes = 186543 MB, so it should have been able to run on your computer. What would you recommend as the next step?

peteroznewman posted this 05 May 2019

I tried solving your model in 18.2 on a computer with 192 GB of RAM, requesting 15 cores. The only changes I made to your model were setting the Solver Type to Iterative (PCG) instead of Direct (Sparse) and removing the Command Object named 'shared out of core'. I let it run for 6 hours.

It has used 148 GB or 77% of the RAM and doesn't seem to be solving anymore.

The files on disk are < 9 GB.

The last few lines of solve.out are shown below:

ELEMENT TYPE  323 IS SHELL281. IT IS ASSOCIATED WITH ELASTOPLASTIC 
 MATERIALS ONLY. KEYOPT(8)=2 IS SUGGESTED AND HAS BEEN RESET.
  KEYOPT(1-12)=    0    0    0    0    0    0    0    2    0    0    0    0

 ELEMENT TYPE  324 IS SHELL281. IT IS ASSOCIATED WITH ELASTOPLASTIC 
 MATERIALS ONLY. KEYOPT(8)=2 IS SUGGESTED AND HAS BEEN RESET.
  KEYOPT(1-12)=    0    0    0    0    0    0    0    2    0    0    0    0



 *** NOTE ***                            CP =    4610.360   TIME= 00:489
 This nonlinear analysis defaults to using the full Newton-Raphson       
 solution procedure.  This can be modified using the NROPT command.      

 SOLUTION MONITORING INFO IS WRITTEN TO FILE= file.mntr                                                                                                                                                                                                                                                           

 *** WARNING ***                         CP =    6576.456   TIME= 01:17:25
 Material property EX of material 155 of element 259042 is evaluated at  
 a temperature of 22, which is below the supplied temperature range.     
 Temperature range checking terminates.                                  
 *WARNING*: Some MPC/Lagrange based elements (e.g.73784041) in real      
 constant set 726 overlap with other MPC/Lagrange based elements         
 (e.g.73786553) in real constant set 728 which can cause                 
 overconstraint.                                                         

I think the solver is still running. It doesn't look like it, but it didn't run out of RAM or disk space.

When I issued the Stop command on the Solution control dialog, it stopped gracefully.

peteroznewman posted this 04 May 2019

It's meshing...

abcdandy posted this 04 May 2019

Hi Peter. Here you are.

https://drive.google.com/file/d/1akF24wlOgwzuLVr_u4eUu2PbGwxWC5Es/view?usp=sharing

peteroznewman posted this 04 May 2019

If you have Gmail, you have Google Drive for holding large attachments. Attach the file to an email addressed to anyone; instead of an attachment, a Google Drive link will be created in the body of the email. Copy that link and paste it into your reply.

abcdandy posted this 04 May 2019

There seems to be an internal error and I cannot attach it. Is there another method by which I can send it to you? The file is 4637 KB.

Regards

abcdandy posted this 04 May 2019

Hi Peter, it is ANSYS 18.2.

Thanks again.

peteroznewman posted this 04 May 2019

Right-click (RMB) on Model and select Clear Generated Data to delete the mesh.

File > Save.

File > Archive.

That will generate a .wbpz file.

Post a reply and say which version of ANSYS you are on, 19.2 or 2019R1 etc.

After you post a reply, the Attach button will show up on the right. Click that, browse to the file and click Upload.

The .wbpz file must be < 120 MB in size for the upload to succeed.

 

abcdandy posted this 04 May 2019

Hi Peter

I am running on a CentOS Linux 7 virtual machine through VMware, and yes, it happens even after a reboot.

That would be appreciated. How would I go about doing that?

peteroznewman posted this 04 May 2019

I don't know what will fix that. Someone like tsiriaks from ANSYS will have better advice.

Does this happen even after a fresh restart of the computer?

What OS are you running this on?

I can try to run it on my large memory computer if you want to upload an archive.

abcdandy posted this 04 May 2019

The solver just finished and I am getting a similar error. My solve process settings are also attached. I don't understand why so little memory is allocated when I have sufficient RAM available.

  

peteroznewman posted this 03 May 2019

You are causing your own problems. You have 125 GB of RAM, yet you allocated 200 GB of RAM for Workspace and 200 GB of RAM for Database. Don't do that; it is the cause of the error. If you are interested, read the ANSYS Help on Memory Management. Here is a useful paragraph:

Mechanical APDL memory is divided into two blocks: the database space that holds the current model data and the scratch space that is used for temporary calculation space (used, for example, for forming graphics images and by the solvers). The database space is specified by the -db command line option. The initial allocation of total workspace is specified by the -m command line option. The scratch space is the total workspace minus the database space.

In general, specifying a total workspace (-m) or database memory (-db) setting at startup is no longer necessary. Both the scratch space and database space (64-bit systems only) grow dynamically, provided the memory is available when the memory manager tries to allocate additional memory from the operating system. If the database space is unavailable (or forced not to grow dynamically via a negative -db value), the program automatically uses a disk file (.PAGE) to spill the database to disk.           
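
To illustrate those two options, a batch launch line might look roughly like the sketch below (the executable name, core count, and memory values are only illustrative; the Workbench Solve Process Settings fields are passed to Mechanical APDL in essentially this way):

 # let workspace and database grow dynamically (the recommended default)
 ansys182 -b -dis -np 15 -i input.dat -o solve.out

 # manually pinned memory: -m is the total workspace in MB, -db the database portion;
 # requesting more than the machine's physical RAM (e.g. 200000 MB on a 125 GB machine)
 # can produce the "not enough memory" message quoted at the top of this thread
 ansys182 -b -dis -np 15 -m 200000 -db 200000 -i input.dat -o solve.out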

I have never had to use 'Manually specify Mechanical APDL solver memory settings' on a well-equipped computer. The only time I did was to help a student on an old computer with only 2 GB of RAM, even though ANSYS specifies a minimum of 4 GB. The default setting for Database is 2 GB of RAM, which caused the same error you saw on that computer with only 2 GB of RAM. To help that student, I recommended 1 GB of memory for the Database, and that allowed the small model to solve.

What is the error you get when you uncheck 'Manually specify Mechanical APDL solver memory settings'?

 

abcdandy posted this 03 May 2019

Hi Peter

I should have clarified. I had already allocated ample memory in the settings, as seen in the attached image.

Thank you,

 

peteroznewman posted this 03 May 2019

What do you mean by this?

I already allocated all the RAM in 'solve process settings' yet this issue is still persisting.

You may be causing your own problem. Please show the solve process settings you have been using.
