CUDA error while fine-tuning Llama 3 8B


I got this error after this much of the training had completed.

I am using Unsloth's FastLanguageModel with a Hugging Face gated model.
If you need more info, please tell me and I will share it immediately. I am a newbie; this is my first time training an LLM.
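
For reference, this is a minimal sketch of what my loading code looks like. The model name, sequence length, and token are placeholders, not my exact values:

```python
from unsloth import FastLanguageModel

# Load the gated Llama 3 8B base model (requires an HF access token).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B",  # placeholder model id
    max_seq_length=2048,
    dtype=None,           # auto-detect (float16 / bfloat16)
    load_in_4bit=True,    # 4-bit quantized loading to reduce VRAM
    token="hf_...",       # HF token for the gated repo
)
```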

Hi @arpit_bansal, from the given error I suspect that the GPU memory is not sufficient for the Llama 3 8B model, or that the batch size you are passing is causing the issue. Thank you.
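
If it is an out-of-memory error, one common mitigation is to shrink the per-device batch size and compensate with gradient accumulation, plus gradient checkpointing. A rough sketch of the relevant trainer settings (the exact values are examples, not tuned recommendations):

```python
from transformers import TrainingArguments

# Smaller per-device batch size with gradient accumulation keeps the
# effective batch size (here 2 * 8 = 16) while lowering peak GPU memory.
training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=2,   # reduce further if OOM persists
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,     # trades extra compute for less memory
    fp16=True,                       # or bf16=True on Ampere or newer GPUs
    num_train_epochs=1,
    logging_steps=10,
)
```

Combined with 4-bit loading (`load_in_4bit=True`), an 8B model can usually be fine-tuned with LoRA on a single 16 GB GPU, but the exact limit depends on sequence length and batch size.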