
Dataset stats:

```
lat_mean = 39.951564548022596
lat_std  = 0.0006361722351128644
lon_mean = -75.19150880602636
lon_std  = 0.000611411894337979
```
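
If the regression targets were standardized with these statistics (a common setup, though not stated explicitly here), predictions can be mapped back to degrees with a small helper like this sketch:

```python
# Assumption: the model outputs z-scored (lat, lon); these constants come from the stats above.
LAT_MEAN, LAT_STD = 39.951564548022596, 0.0006361722351128644
LON_MEAN, LON_STD = -75.19150880602636, 0.000611411894337979

def denormalize(pred_lat, pred_lon):
    """Convert normalized model outputs back to latitude/longitude in degrees."""
    return pred_lat * LAT_STD + LAT_MEAN, pred_lon * LON_STD + LON_MEAN
```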

The model can be loaded using:

```python
from huggingface_hub import hf_hub_download
import torch

# Repository and filename of the trained model checkpoint
repo_id = "FinalProj5190/ImageToGPSproject-resnet_vit-base"
filename = "resnet_vit_gps_regressor_complete.pth"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load the full pickled model and switch to evaluation mode
model_test = torch.load(model_path)
model_test.eval()
```

The model implementation is below. Because the checkpoint is a full pickled model, the `HybridGPSModel` class must be defined (or importable) before calling `torch.load`:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18
from transformers import ViTModel

class HybridGPSModel(nn.Module):
    def __init__(self, num_classes=2):
        super(HybridGPSModel, self).__init__()
        # Pre-trained ResNet for feature extraction
        self.resnet = resnet18(pretrained=True)
        self.resnet.fc = nn.Identity()

        # Pre-trained Vision Transformer
        self.vit = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')

        # Combined regression head
        self.regression_head = nn.Sequential(
            nn.Linear(512 + self.vit.config.hidden_size, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes)
        )

    def forward(self, x):
        resnet_features = self.resnet(x)
        vit_outputs = self.vit(pixel_values=x)
        vit_features = vit_outputs.last_hidden_state[:, 0, :]  # CLS token

        combined_features = torch.cat((resnet_features, vit_features), dim=1)

        # Predict GPS coordinates
        gps_coordinates = self.regression_head(combined_features)
        return gps_coordinates
```
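
A minimal inference sketch, assuming 224x224 inputs preprocessed with the ViT image processor and z-scored GPS targets (the exact preprocessing used during training is not documented here); `example.jpg` is a placeholder path and `denormalize` is the helper defined above:

```python
from PIL import Image
from transformers import ViTImageProcessor

# Assumption: the same preprocessing as the ViT backbone's default processor.
processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224-in21k')

image = Image.open("example.jpg")               # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    pred = model_test(inputs["pixel_values"])   # shape (1, 2): normalized (lat, lon)

lat, lon = denormalize(pred[0, 0].item(), pred[0, 1].item())
print(lat, lon)
```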