muellerzr HF staff committed on
Commit a6ae7ff
1 Parent(s): 7efa62d
Files changed (1)
  1. app.py +2 -0
app.py CHANGED
@@ -127,6 +127,8 @@ with gr.Blocks() as demo:
 on a model hosted on the 🤗 Hugging Face Hub. The minimum recommended vRAM needed for a model
 is denoted as the size of the "largest layer", and training of a model is roughly 4x its size (for Adam).
 
+ These calculations are accurate within a few percent at most, such as `bert-base-cased` being 413.68 MB and the calculator estimating 413.18 MB.
+
 When performing inference, expect to add up to an additional 20% to this as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/).
 More tests will be performed in the future to get a more accurate benchmark for each model.
 
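The rule of thumb described in this text can be sketched as simple arithmetic. The snippet below is not the app's actual code; `estimate_vram_mb` is a hypothetical helper that only applies the "roughly 4x for Adam training" and "up to an additional 20% for inference" factors mentioned above to a known parameter size in MB.

```python
def estimate_vram_mb(model_size_mb: float) -> dict:
    """Back-of-the-envelope vRAM estimates from a model's parameter size (MB)."""
    return {
        # inference: weights plus up to ~20% overhead, per the EleutherAI post
        "inference": model_size_mb * 1.2,
        # training with Adam: roughly 4x the model size
        "training_adam": model_size_mb * 4,
    }

# Example using the bert-base-cased size quoted in the diff (413.68 MB):
print(estimate_vram_mb(413.68))
# {'inference': 496.416, 'training_adam': 1654.72}
```

The calculator itself derives the "largest layer" and total size from the model's actual weights on the Hub rather than from a single size number, so this sketch only illustrates the scaling factors, not the measurement.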