Large model loading
You can also directly place the model on different devices if it doesn't fully fit in RAM (this only works for inference for now). With `device_map="auto"`, Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest to the CPU, or even to the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.

When passing a `device_map`, `low_cpu_mem_usage` is automatically set to `True`, so you don't need to specify it:
```python
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
You can inspect how the model was split across devices by looking at its `hf_device_map` attribute:
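As a minimal sketch of what this looks like: `hf_device_map` is a plain dict mapping submodule names to devices (GPU indices, `"cpu"`, or `"disk"`). The map below uses illustrative module names and placements rather than loading the real multi-gigabyte model; the exact layout on your machine depends on your hardware.

```python
from collections import defaultdict

# Illustrative device map, similar in shape to what hf_device_map returns;
# real keys and placements depend on the model and available memory.
example_device_map = {
    "shared": 0,
    "encoder": 0,
    "decoder.block.0": 0,
    "decoder.block.1": 1,
    "lm_head": "cpu",
}

# Group submodules by the device they were assigned to.
modules_per_device = defaultdict(list)
for module, device in example_device_map.items():
    modules_per_device[device].append(module)

for device, modules in modules_per_device.items():
    print(device, "->", modules)
```

On a real model, replace `example_device_map` with `t0pp.hf_device_map` to see which layers ended up on each GPU, the CPU, or disk.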
Note: If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the `StableDiffusionPipeline` in float16 precision instead of the default float32 precision as done above. You can do so by loading the weights from the `fp16` branch and by telling `diffusers` to expect the weights to be in float16 precision:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16
)
```
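To see why the precision choice matters, here is a rough back-of-the-envelope estimate of weight memory. The parameter count below is an approximation for the Stable Diffusion v1 components (UNet, text encoder, VAE), used purely for illustration:

```python
# Rough memory estimate for model weights: params * bytes_per_param.
# ~1.07e9 parameters is an approximate total for Stable Diffusion v1,
# not an exact figure.
params = 1_070_000_000
bytes_fp32 = params * 4  # float32: 4 bytes per parameter
bytes_fp16 = params * 2  # float16: 2 bytes per parameter

print(f"float32: {bytes_fp32 / 2**30:.1f} GiB")
print(f"float16: {bytes_fp16 / 2**30:.1f} GiB")
```

Halving the bytes per parameter halves the weight memory, which is what makes the pipeline fit on GPUs with less than 10GB of RAM (inference activations add further overhead on top of this).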