Working environment for running MedGemma (PyTorch/Transformers versions, CUDA, etc.)

Hi everyone,

I’m trying to get MedGemma running reliably and I’m running into version-pinning issues (mainly around torch, transformers, and accelerate). Could folks who have MedGemma working end-to-end share their exact environment? Thanks!


MedGemma should work with the current latest versions of transformers, torch, and accelerate. For reference, here is the full pip freeze from a working environment:

accelerate==1.11.0
certifi==2025.11.12
charset-normalizer==3.4.4
filelock==3.20.0
fsspec==2025.10.0
hf-xet==1.2.0
huggingface-hub==0.36.0
idna==3.11
Jinja2==3.1.6
MarkupSafe==3.0.3
mpmath==1.3.0
networkx==3.5
numpy==2.3.4
nvidia-cublas-cu12==12.8.4.1
nvidia-cuda-cupti-cu12==12.8.90
nvidia-cuda-nvrtc-cu12==12.8.93
nvidia-cuda-runtime-cu12==12.8.90
nvidia-cudnn-cu12==9.10.2.21
nvidia-cufft-cu12==11.3.3.83
nvidia-cufile-cu12==1.13.1.3
nvidia-curand-cu12==10.3.9.90
nvidia-cusolver-cu12==11.7.3.90
nvidia-cusparse-cu12==12.5.8.93
nvidia-cusparselt-cu12==0.7.1
nvidia-nccl-cu12==2.27.5
nvidia-nvjitlink-cu12==12.8.93
nvidia-nvshmem-cu12==3.3.20
nvidia-nvtx-cu12==12.8.90
packaging==25.0
pillow==12.0.0
pip==25.3
psutil==7.1.3
PyYAML==6.0.3
regex==2025.11.3
requests==2.32.5
safetensors==0.6.2
setuptools==80.9.0
sympy==1.14.0
tokenizers==0.22.1
torch==2.9.0
tqdm==4.67.1
transformers==4.57.1
triton==3.5.0
typing_extensions==4.15.0
urllib3==2.5.0
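Before debugging model code, it can save time to confirm the key pins are actually what you think they are. A minimal sketch (the package names and minimum versions below are taken from the freeze above; the comparison helper ignores pre-release and local-build tags like `+cu128`):

```python
from importlib.metadata import version, PackageNotFoundError

def at_least(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, ignoring local tags like '+cu128'."""
    parse = lambda v: tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())
    return parse(installed) >= parse(required)

# Minimum versions taken from the pip freeze above.
for pkg, minimum in [("torch", "2.9.0"), ("transformers", "4.57.1"), ("accelerate", "1.11.0")]:
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    print(f"{pkg} {installed}: {'OK' if at_least(installed, minimum) else 'too old'}")
```

Running this in your venv gives a quick yes/no per package instead of eyeballing a long freeze.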

I’m working on a hackathon with MedGemma and I’m running into this issue as well: GitHub - ODSCGoogleHackhathon/googol at feat/medgemma

I don’t think we can make the model much more lightweight through the requirements alone. Here are the requirements I’m using for the model:

```txt
pydantic==2.10.5
pydantic-settings==2.7.0
python-dotenv==1.0.1

# Image Processing
Pillow==11.0.0

# Utilities
aiofiles==24.1.0
httpx==0.28.1

# MedGemma - Heavy ML Stack
transformers>=4.45.0
torch>=2.0.0
```
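To put a number on why the ML stack is the heavy part: weight memory is essentially parameter count times bytes per parameter, so dependency pins barely matter next to the dtype you load in. A rough back-of-envelope sketch (assumes the ~4B-parameter MedGemma variant; activations and KV cache come on top of this):

```python
def weight_footprint_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate size of model weights alone, in GiB (no activations, no KV cache)."""
    return n_params * bits_per_param / 8 / 1024**3

# Assumption: ~4e9 parameters for the 4B MedGemma variant.
for dtype, bits in [("float32", 32), ("bfloat16", 16), ("int4", 4)]:
    print(f"{dtype}: ~{weight_footprint_gb(4e9, bits):.1f} GiB")
```

So loading in bfloat16 instead of float32 roughly halves the weight footprint, and 4-bit quantization cuts it further; that is where the real savings are, not in trimming the requirements file.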