
Dataset statistics:

```
lat_mean = 39.951564548022596
lat_std  = 0.0006361722351128644
lon_mean = -75.19150880602636
lon_std  = 0.000611411894337979
```
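These statistics can be used to z-score the GPS targets and to map model outputs back to real coordinates. A minimal sketch (the `normalize`/`denormalize` helpers are illustrative, not part of this repo):

```python
# Dataset statistics from above
lat_mean, lat_std = 39.951564548022596, 0.0006361722351128644
lon_mean, lon_std = -75.19150880602636, 0.000611411894337979

def normalize(lat, lon):
    """Map raw coordinates into the z-scored space the model trains on."""
    return (lat - lat_mean) / lat_std, (lon - lon_mean) / lon_std

def denormalize(lat_z, lon_z):
    """Map normalized model outputs back to real latitude/longitude."""
    return lat_z * lat_std + lat_mean, lon_z * lon_std + lon_mean

# Round-trip recovers the original coordinates (up to float precision)
lat, lon = denormalize(*normalize(39.9515, -75.1915))
```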

The model can be loaded with:

```python
from huggingface_hub import hf_hub_download
import torch

# Repository and filename of the checkpoint
repo_id = "FinalProj5190/ImageToGPSproject_new_vit"
filename = "resnet_gps_regressor_complete.pth"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# The checkpoint stores the full pickled model; on PyTorch >= 2.6 pass
# weights_only=False, since torch.load defaults to weights_only=True there.
model_test = torch.load(model_path, weights_only=False)
model_test.eval()  # set the model to evaluation mode for inference
```

The model implementation:

```python
import torch.nn as nn
from transformers import ViTModel

class MultiModalModel(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Pretrained ViT backbone
        self.vit = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')

        # Regression head in place of a classification head:
        # two outputs, latitude and longitude
        self.regression_head = nn.Sequential(
            nn.Linear(self.vit.config.hidden_size, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes)
        )

    def forward(self, x):
        outputs = self.vit(pixel_values=x)
        # CLS token embedding from the last hidden state
        cls_output = outputs.last_hidden_state[:, 0, :]
        # Map the CLS embedding to (lat, lon)
        gps_coordinates = self.regression_head(cls_output)
        return gps_coordinates
```
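The head maps the 768-dimensional CLS embedding to two outputs, latitude and longitude. A minimal shape-contract sketch, with the ViT-base hidden size hard-coded so it runs without downloading the backbone weights:

```python
import torch
import torch.nn as nn

hidden_size = 768  # ViT-base hidden size (self.vit.config.hidden_size)

# Same architecture as MultiModalModel.regression_head above
regression_head = nn.Sequential(
    nn.Linear(hidden_size, 512),
    nn.ReLU(),
    nn.Linear(512, 2),
)

# Dummy batch of 4 CLS embeddings standing in for the ViT output
cls_output = torch.randn(4, hidden_size)
gps_coordinates = regression_head(cls_output)
print(gps_coordinates.shape)  # torch.Size([4, 2])
```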