Release of Data and Code for Attention Map Analysis

#221
by katarinayuan - opened

Hi, could you please release the data and code for the attention weight analysis, so that others can reproduce it?

Thank you for your interest in Geneformer! To extract attention weights, you can load the model with the "load_model" function in the in silico perturber, changing output_attentions to True (you can also set output_hidden_states to False to save memory if you are not interested in extracting embeddings). For example:

    from transformers import BertForMaskedLM

    model = BertForMaskedLM.from_pretrained(model_directory,
                                            output_hidden_states=False,
                                            output_attentions=True)

Then, you can extract the attention weights from the output of "forward_pass_single_cell" in the in silico perturber.
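For reference, here is a minimal sketch of reading the attention tensors from a standard Hugging Face forward pass; the output object returned by "forward_pass_single_cell" is analogous. The input_ids below is a random stand-in for illustration only; in practice you would use a real tokenized single cell from Geneformer's tokenizer:

    import torch

    # Stand-in input for illustration: a random token sequence in place of
    # a real tokenized single cell from Geneformer's tokenizer
    input_ids = torch.randint(0, model.config.vocab_size, (1, 128))

    with torch.no_grad():
        outputs = model(input_ids=input_ids)

    # outputs.attentions is a tuple with one tensor per layer, each of
    # shape (batch_size, num_heads, seq_len, seq_len)
    attentions = outputs.attentions
    last_layer = attentions[-1]        # final layer's attention maps
    head_avg = last_layer.mean(dim=1)  # average over attention heads

Each (i, j) entry of an attention map is the weight token i places on token j, so you can aggregate rows or columns for the genes of interest.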

The data is publicly available - please see the reference in the manuscript.

ctheodoris changed discussion status to closed
