Hi, I am working on the project and I want to train a model to detect pneumonia. I am using the chest X-ray dataset here, which is only about 1 GB total. My architecture is very similar to what we were taught in class, and I am feeding in 300×300 images in batches of 35. I am running the notebook on Kaggle, and almost every time I run out of both CPU RAM and GPU VRAM. Can anyone tell me what's going on?
@colsonxu What is the model that you’re using?
Do you get a CUDA OOM error? If so, you have the following options:
- Choose a smaller model architecture.
- Resize images to a lower resolution.
- Reduce the batch size (restart the kernel after changing it; sometimes old allocations stay cached).
- An advanced step that will take some research into what it is and how to apply it to your model: mixed precision. (I would suggest looking into this later.)
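For the last bullet, here is a minimal mixed-precision sketch in PyTorch, assuming a standard training loop; the tiny model, optimizer, and `train_step` name are placeholders, not anything from the class:

```python
import torch
from torch import nn

# Hypothetical tiny CNN just to make the loop runnable end to end
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 2))
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# GradScaler + autocast run the forward pass in half precision where safe,
# which roughly halves activation memory on the GPU; both become no-ops on CPU
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

def train_step(x, y):
    opt.zero_grad()
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 underflow
    scaler.step(opt)               # unscale gradients, then optimizer step
    scaler.update()
    return loss.item()
```

The memory savings come almost entirely from the activations stored for backprop, which is exactly what a large batch of 300×300 images inflates.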
I resolved the issue by using a less complex model. It turned out that my fully-connected layer was way too large.
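For anyone hitting the same thing: a fully-connected layer has one weight per (input, output) pair, so flattening a large feature map before it makes the parameter count explode. A quick back-of-the-envelope check, with a purely hypothetical feature map size (64 channels at 37×37, not taken from the actual model):

```python
def fc_params(in_features, out_features, bias=True):
    """Parameter count of a dense layer: one weight per (input, output) pair."""
    return in_features * out_features + (out_features if bias else 0)

# Hypothetical feature map entering the classifier head: 64 x 37 x 37
flattened = 64 * 37 * 37             # 87,616 inputs after Flatten
params = fc_params(flattened, 512)   # dense layer down to 512 units
print(params)                        # ~44.9 million parameters

bytes_fp32 = params * 4              # 4 bytes per float32 weight
print(f"{bytes_fp32 / 1e6:.0f} MB just for this one layer's weights")
```

Halving the input resolution roughly quarters the flattened size, which is why resizing images (or adding a global average pool before the head) shrinks memory so dramatically.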