sqiud committed · Commit c9573ac · verified · 1 Parent(s): 12c6184

Create README.md

Files changed (1): README.md (+44, -0)

Dataset stats:
- lat_mean = 39.951564548022596
- lat_std = 0.0006361722351128644
- lon_mean = -75.19150880602636
- lon_std = 0.000611411894337979
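
If these are the mean/std used to normalize the GPS targets during training (an assumption; the `denormalize_gps` helper below is ours, not part of the repo), predictions in normalized space can be mapped back to degrees like this:
```
import torch

# Assumed normalization statistics for the GPS targets (values from the stats above)
LAT_MEAN, LAT_STD = 39.951564548022596, 0.0006361722351128644
LON_MEAN, LON_STD = -75.19150880602636, 0.000611411894337979

def denormalize_gps(pred):
    """Map an (N, 2) tensor of normalized (lat, lon) predictions back to degrees."""
    lat = pred[:, 0] * LAT_STD + LAT_MEAN
    lon = pred[:, 1] * LON_STD + LON_MEAN
    return torch.stack([lat, lon], dim=1)
```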

The model can be loaded using:
```
from huggingface_hub import hf_hub_download
import torch

# Specify the repository and the filename of the model you want to load
repo_id = "FinalProj5190/ImageToGPSproject-vit-base"  # Replace with your repo name
filename = "resnet_gps_regressor_complete.pth"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load the full pickled model with torch. The MultiModalModel class below (and its
# imports) must be defined in the loading environment; on PyTorch >= 2.6 you may
# also need to pass weights_only=False to torch.load.
model_test = torch.load(model_path)
model_test.eval()  # Set the model to evaluation mode
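
Once loaded, inference on a single image might look like the sketch below. It assumes the standard preprocessing for the `google/vit-base-patch16-224-in21k` backbone, a hypothetical input file `example.jpg`, and the `denormalize_gps` helper sketched above; none of these are confirmed by the repo itself.
```
import torch
from PIL import Image
from transformers import ViTImageProcessor

# Assumption: the model expects the standard preprocessing of its ViT backbone
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    pred = model_test(pixel_values)   # model output, assumed to be normalized (lat, lon)
    lat_lon = denormalize_gps(pred)   # back to degrees using the stats above

print(lat_lon)
```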

The model implementation is here:
```
import torch.nn as nn
from transformers import ViTModel

class MultiModalModel(nn.Module):
    def __init__(self, num_classes=2):
        super(MultiModalModel, self).__init__()
        self.vit = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')

        # Regression head in place of a classification head
        self.regression_head = nn.Sequential(
            nn.Linear(self.vit.config.hidden_size, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes)
        )

    def forward(self, x):
        outputs = self.vit(pixel_values=x)
        # Take the last hidden state (CLS token embedding)
        cls_output = outputs.last_hidden_state[:, 0, :]
        # Pass through the regression head
        gps_coordinates = self.regression_head(cls_output)
        return gps_coordinates
```
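
A quick shape check of the architecture (dummy input, untrained head; this is only a sanity-check sketch, not part of the training pipeline):
```
import torch

model = MultiModalModel()            # downloads the ViT backbone weights
model.eval()
dummy = torch.randn(1, 3, 224, 224)  # ViT-base/16 expects 224x224 RGB input
with torch.no_grad():
    print(model(dummy).shape)        # torch.Size([1, 2]) -> (lat, lon)
```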