Deep Supervised Information Bottleneck Hashing for Cross-modal Retrieval based Computer-aided Diagnosis

Yufeng Shi1, Shuhuang Chen1, Xinge You1*, Qinmu Peng1, Weihua Ou2, Yue Zhao3
1Huazhong University of Science and Technology
2Guizhou Normal University
3Hubei University
{yufengshi17, shuhuangchen, youxg, pengqinmu}[email protected], [email protected], [email protected]
Abstract

Mapping X-ray images, radiology reports, and other medical data as binary codes in a common space, which can assist clinicians in retrieving pathology-related data from heterogeneous modalities (i.e., hashing-based cross-modal medical data retrieval), provides a new way to promote computer-aided diagnosis. Nevertheless, a barrier to boosting medical retrieval accuracy remains: how to reveal the ambiguous semantics of medical data without the distraction of superfluous information. To circumvent this drawback, we propose Deep Supervised Information Bottleneck Hashing (DSIBH), which effectively strengthens the discriminability of hash codes. Specifically, the Deep Deterministic Information Bottleneck (Yu, Yu, and Príncipe 2021) for a single modality is extended to the cross-modal scenario. Benefiting from this, superfluous information is reduced, which facilitates the discriminability of hash codes. Experimental results demonstrate the superior accuracy of the proposed DSIBH compared with state-of-the-art methods in cross-modal medical data retrieval tasks.
The rapid development of medical technology not only provides diverse medical examinations but also produces tremendous amounts of medical data, ranging from X-ray images to radiology reports. Manually assessing medical data and diagnosing disease is an experience-demanding, time-consuming, and error-prone job. To reduce the workload of physicians and optimize the diagnostic process, computer-aided diagnosis (CAD) systems, including classifier-based CAD (Shi et al. 2020; Inés et al. 2021) and content-based image retrieval (CBIR) based CAD (Yang et al. 2020; Fang, Fu, and Liu 2021), have been designed to automatically identify illness. Although these two types of methods have greatly promoted the development of CAD, existing systems ignore the character of current medical data, which is diverse in modality and huge in scale. Therefore, we introduce cross-modal retrieval (CMR) (Wang et al. 2016) techniques and construct a CMR-based CAD method using semantic hashing (Wang et al. 2017) to handle the above challenges.

With the help of CMR, which projects multimodal data into a common space, samples from different modalities can be directly matched without the interference of heterogeneity. Therefore, CMR-based CAD can not only retrieve semantically similar clinical profiles in heterogeneous modalities but also provide diagnosis results according to previous medical advice. Compared with classifier-based CAD, which only provides diagnosis results, CMR-based CAD is more acceptable due to the interpretability brought by the retrieved profiles. Compared with CBIR-based CAD, CMR-based CAD wins on its extended view of multi-modal data, which meets the requirements of current medical data.

*Contact Author
Accepted by the AAAI-22 Workshop on Information Theory for Deep Learning (IT4DL).
Recently, researchers have done extensive work on hashing-based CMR, which maps data from different modalities into the same Hamming space (Li et al. 2018; Zhu et al. 2020; Yu et al. 2021). Due to its compact binary codes and XOR-based distance calculation, hashing-based CMR offers low memory usage and high query speed (Wang et al. 2017), which is also compatible with the huge volume of current medical data. In terms of accuracy, the suitable hashing-based solutions for CMR-based CAD are deep supervised hashing (DSH) methods (Xie et al. 2020; Zhan et al. 2020; Yao et al. 2021). With the guidance of manual annotations, deep supervised methods usually perform hash code learning based on the original data via neural networks. Inspired by the information bottleneck principle (Tishby, Pereira, and Bialek 1999), the above-mentioned optimization procedure can be viewed as building a hash code G about a semantic label Y through samples in different modalities X = {X^1, X^2}, which can be formulated as:

max L = I(G; Y) − βI(G; X), (1)

where I(·;·) represents mutual information and β is a hyper-parameter. As quantified by I(G;Y), current DSH methods model the semantic annotations to establish pair-wise, triplet-wise, or class-wise relations, and maximize the correlation between hash codes and these semantic relations. Despite this consideration of semantic relations, neglecting I(G;X) results in the retention of redundant information from the original data, which limits the improvement of retrieval accuracy. I(G;X) measures the correlation between the hash code G and the data from the two modalities X; constraining it reduces the superfluous information from medical data and forces the hash code to grasp the correct semantics from annotations. Therefore, it can be expected that optimizing Eq. (1) strengthens the discriminability of hash codes, which improves the accuracy of CMR-based CAD.
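As a toy illustration of the first term in Eq. (1), the sketch below (not from the paper; a plug-in estimator for small discrete variables, whereas the paper estimates mutual information with a matrix-based Rényi functional) shows that a code which copies the labels attains I(G;Y) = H(Y), while a constant code carries no label information:

```python
import numpy as np

def mutual_information_bits(x, y):
    """Plug-in estimate of I(X;Y) in bits for two small discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

labels = np.array([0, 1, 0, 1])         # toy binary labels Y
codes_good = labels.copy()              # a code that copies Y: I(G;Y) = H(Y) = 1 bit
codes_const = np.zeros(4, dtype=int)    # a constant code: I(G;Y) = 0 bits
```

Balancing this term against I(G;X) is exactly the trade-off that Eq. (1) formalizes.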
To perform CMR-based CAD, we design a novel method named Deep Supervised Information Bottleneck Hashing (DSIBH), which optimizes the information bottleneck to strengthen the discriminability of hash codes. Specifically, to avoid variational inference and distribution assumptions, we extend the Deep Deterministic Information Bottleneck (DDIB) (Yu, Yu, and Príncipe 2021) from the single-modality setting to the cross-modal scenario for hash code learning.
To summarize, our main contributions are fourfold:

• The cross-modal retrieval technique based on semantic hashing is introduced to establish computer-aided diagnosis systems, which is suitable for current large-scale multi-modal medical data.

• A deep hashing method named DSIBH, which optimizes the hash code learning procedure following the information bottleneck principle to reduce the distraction of superfluous information, is proposed for CMR-based CAD.

• To reduce the adverse impact of variational inference and distribution assumptions, the Deep Deterministic Information Bottleneck is extended to the cross-modal scenario for hash code learning.

• Experiments on the large-scale multi-modal medical dataset MIMIC-CXR show that DSIBH strengthens the discriminability of hash codes more effectively than other methods, thus boosting retrieval accuracy.
Related Work

In this section, representative CAD approaches and hashing-based solutions for cross-modal retrieval are briefly reviewed. To help readers understand our work, background on the DDIB is also introduced.
Computer-aided Diagnosis

CAD approaches generally fall into two types: classifier-based CAD and CBIR-based CAD. Thanks to the rapid progress of deep learning, classifier-based CAD methods (Zhang et al. 2019; de La Torre, Valls, and Puig 2020) can construct task-specific neural networks to categorize histopathology images and employ the outcomes as the diagnosis. On the other side, CBIR-based CAD can provide clinical evidence, since it retrieves and visualizes images with the most similar morphological profiles. According to the data type of the representations, existing CBIR methods can be divided into continuous-value CBIR (Erfankhah et al. 2019; Zhen et al. 2020) and hashing-based CBIR (Hu et al. 2020; Yang et al. 2020). In the age of big data, the latter has increasingly become mainstream due to the low memory usage and high query speed brought by hashing. Although substantial efforts have been made to analyze clinical images, medical data in other modalities, such as radiology reports, are ignored. Consequently, CAD is restricted to a single modality, and the cross-modal relevance between different modalities remains to be explored.

Cross-modal Retrieval
Cross-modal hashing has made remarkable progress in handling cross-modal retrieval. These methods can be roughly divided into two major types, unsupervised approaches and supervised approaches, according to their use of semantic information. Due to the absence of semantic information, the former usually relies on data distributions to align semantic similarities across modalities (Liu et al. 2020; Yu et al. 2021). For example, Collective Matrix Factorization Hashing (Ding et al. 2016) learns unified hash codes by collective matrix factorization with a latent factor model to capture instance-level correlations. Recently, the Deep Graph-neighbor Coherence Preserving Network (Yu et al. 2021) additionally explores graph-neighbor coherence to describe complex data relationships. Although data distributions indeed help to solve cross-modal retrieval to some extent, unsupervised methods fail to capture high-level semantic relations due to the neglect of manual annotations.
Supervised hashing methods were thereafter proposed to perform hash code learning with the guidance of manual annotations. Data points are encoded to express semantic similarity, such as pair-wise (Shen et al. 2017; Wang et al. 2019), triplet-wise (Hu et al. 2019; Song et al. 2021), or multi-wise similarity relations (Cao et al. 2017; Li et al. 2018). As an early attempt with deep learning, Deep Cross-modal Hashing (Jiang and Li 2017) directly encodes original data points by minimizing the negative log likelihood of the cross-modal similarities. To discover high-level semantic information, Self-Supervised Adversarial Hashing (Li et al. 2018) harnesses a self-supervised semantic network to preserve the pair-wise relationships. Although various relations have been built between hash codes and semantic labels, the aforementioned algorithms still suffer from the distraction of superfluous information, which is caused by the connections between the hash code and the original data. Consequently, for CMR-based CAD, there remains a need for a deep hashing method that can reduce superfluous information to strengthen the discriminability of hash codes.
Deep Deterministic Information Bottleneck

Despite great efforts to handle the ambiguous semantics of medical data, the discriminability of hash codes still needs to be strengthened. A promising remedy is the Deep Deterministic Information Bottleneck (Yu, Yu, and Príncipe 2021), which has been shown to reduce superfluous information during feature extraction. Before elaborating on our solution, we introduce basic knowledge of the DDIB below.
DDIB adopts a neural network to parameterize the information bottleneck (Tishby, Pereira, and Bialek 1999), which considers extracting information about a target signal Y through a correlated observable X. The extracted information is represented as a variable T, and the extraction process can be formulated as:

max L_IB = I(T; Y) − βI(T; X). (2)

When the above objective is optimized with a neural network, T is the output of one hidden layer. To update the parameters of the network, the second term in Eq. (2) is calculated with the differentiable matrix-based Rényi's α-order mutual information:

I_α(X; T) = H_α(X) + H_α(T) − H_α(X, T), (3)

where H_α(·) denotes the matrix-based analogue of Rényi's α-entropy and H_α(·,·) is the matrix-based analogue of Rényi's α-order joint entropy. More details of the matrix-based Rényi's α-order entropy functional can be found in (Yu et al. 2019).

For the first term in Eq. (2), since I(T;Y) = H(Y) − H(Y|T), maximizing I(T;Y) is equivalent to minimizing H(Y|T). Given the training set {x_i, y_i}_{i=1}^N, the average cross-entropy loss is adopted to minimize H(Y|T):

(1/N) Σ_{i=1}^N E_{t∼p(t|x_i)} [−log p(y_i | t)]. (4)

Therefore, DDIB shows that the information bottleneck in a single modality can be optimized with a cross-entropy loss and a differentiable mutual information term I(T;X). This differentiable optimization strategy can benefit DSH methods in terms of superfluous information reduction.
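The matrix-based functional behind Eq. (3) can be sketched as follows. This is a minimal NumPy rendition of the entropy functional as summarized above (eigenvalues of a trace-normalized Gram matrix; the joint entropy via the normalized Hadamard product); the RBF kernel, its width, α = 1.01, and all shapes are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def gram(X, sigma=1.0):
    """Trace-normalized RBF Gram matrix of a batch of row vectors."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    return K / np.trace(K)

def renyi_entropy(A, alpha=1.01):
    """Matrix-based Renyi alpha-entropy (in bits) of a trace-normalized PSD matrix."""
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # guard tiny negative eigenvalues
    return float(np.log2(np.sum(lam ** alpha)) / (1.0 - alpha))

def joint_entropy(A, B, alpha=1.01):
    """Joint entropy via the trace-normalized Hadamard product of two Gram matrices."""
    AB = A * B
    return renyi_entropy(AB / np.trace(AB), alpha)

def mutual_information(X, T, alpha=1.01):
    """I_alpha(X;T) = H_alpha(X) + H_alpha(T) - H_alpha(X,T), as in Eq. (3)."""
    A, B = gram(X), gram(T)
    return renyi_entropy(A, alpha) + renyi_entropy(B, alpha) - joint_entropy(A, B, alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))   # stand-in for a batch of inputs
T = rng.normal(size=(20, 3))   # stand-in for the corresponding code outputs
I_XT = mutual_information(X, T)
```

Because every step (kernel, eigendecomposition) is differentiable, the same quantity can serve as a training loss in an autodiff framework, which is what makes the DDIB strategy attractive for deep hashing.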
Method

In this section, we first present the problem definition and then detail our DSIBH method. The optimization procedure is given at the end. For illustration purposes, DSIBH is applied to X-ray images and radiology reports.
Notation and problem definition

Matrices and vectors in this paper are represented by boldface uppercase letters (e.g., G) and boldface lowercase letters (e.g., g), respectively. ‖·‖ denotes the 2-norm of a vector. sign(·) is the sign function, which outputs 1 if its input is positive and -1 otherwise.

Let X^1 = {x^1_i}_{i=1}^N and X^2 = {x^2_j}_{j=1}^N denote the X-ray images and radiology reports in the training set, where x^1_i ∈ R^{d1} and x^2_j ∈ R^{d2}. Their semantic labels, which indicate the existence of pathology, are represented by Y = {y_l}_{l=1}^N, where y_l = {y_{l1}, y_{l2}, ..., y_{ld3}} ∈ R^{d3}. Following (Cao et al. 2016; Jiang and Li 2017; Li et al. 2018), we define the semantic affinities S ∈ {0,1}^{N×N} between x^1_i and x^2_j using the semantic labels: if x^1_i and x^2_j share at least one category label, they are semantically similar and S_ij = 1; otherwise, they are semantically dissimilar and S_ij = 0.
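This affinity definition takes a couple of lines in practice. The sketch below uses invented 3-category multi-hot labels purely for illustration:

```python
import numpy as np

# Multi-hot labels: rows are samples, columns are pathology categories (invented toy data).
Y_img = np.array([[1, 0, 0],
                  [0, 1, 1],
                  [1, 1, 0]])
Y_txt = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [0, 1, 0]])

# S_ij = 1 iff image i and report j share at least one category label.
S = (Y_img @ Y_txt.T > 0).astype(int)
```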
The goal of the proposed DSIBH is to learn hash functions f^1(θ^1; X^1): R^{d1} → R^{dc} and f^2(θ^2; X^2): R^{d2} → R^{dc}, which map X-ray images and radiology reports to approximate binary codes G^1 and G^2 in the same continuous space. Binary codes can then be generated by applying the sign function to G^1 and G^2.

Meanwhile, the Hamming distance D(g^1_i, g^2_j) between hash codes g^1_i and g^2_j needs to reflect the semantic similarity S_ij between x^1_i and x^2_j, which can be formulated as:

S_ij ∝ −D(g^1_i, g^2_j). (5)
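For ±1 codes of length c, the Hamming distance and the inner product are linked by D_H(g^1, g^2) = (c − ⟨g^1, g^2⟩)/2, which is why the inner-product losses used later can stand in for the distance relation in Eq. (5). A quick check on toy codes (not from the paper):

```python
import numpy as np

c = 8
g1 = np.array([1, -1, 1, 1, -1, -1, 1, -1])
g2 = np.array([1, 1, 1, -1, -1, 1, 1, -1])

d_hamming = int(np.sum(g1 != g2))   # direct count of differing bits
d_inner = (c - int(g1 @ g2)) // 2   # identity D_H = (c - <g1, g2>) / 2 for ±1 codes
```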
Information Bottleneck in the Cross-modal Scenario

To improve the accuracy of CMR-based CAD, the superfluous information that medical data injects into the hash code learning procedure should be reduced via the information bottleneck principle. Therefore, the information bottleneck principle for a single modality must be extended to the cross-modal scenario, where one instance can have descriptions in different modalities.

The analysis starts from the hash code learning processes for X-ray images and radiology reports, respectively. Following the information bottleneck principle, the basic objective functions can be formulated as:

max L_IB1 = I(G^1; Y^1) − βI(G^1; X^1),
max L_IB2 = I(G^2; Y^2) − βI(G^2; X^2). (6)
In the cross-modal scenario, the X-ray images and radiology reports in the training set describe a common pathology, so each image-report pair owns the same semantic label. To implement this idea, Eq. (6) is transformed into:

max L_IB1 = I(G^1; Y) − βI(G^1; X^1),
max L_IB2 = I(G^2; Y) − βI(G^2; X^2). (7)

Furthermore, the same hash code should be assigned to the paired samples to guarantee consistency across modalities, which is achieved with an ℓ2 loss:

min L_CONS = E[‖g^1 − g^y‖^2] + E[‖g^2 − g^y‖^2], (8)

where G^y represents the modality-invariant hash codes for the image-report pairs.
Incorporating Eq. (7) and Eq. (8), the overall objective of the information bottleneck principle in the cross-modal scenario is formulated as:

max L_IBC = (I(G^1; Y) + I(G^2; Y))
            − β(I(G^1; X^1) + I(G^2; X^2))
            − γ(E[‖g^1 − g^y‖^2] + E[‖g^2 − g^y‖^2]). (9)
Deep Supervised Information Bottleneck Hashing

Following the information bottleneck principle in the cross-modal scenario (i.e., Eq. (9)), three variables, G^y, G^1, and G^2, should be optimized. To obtain the modality-invariant G^y, we build a labNet f^y that directly transforms semantic labels into pair-level hash codes. The labNet is a two-layer Multi-Layer Perceptron (MLP) with 4096 and c nodes. We then build an imgNet f^1 and a txtNet f^2 as hash functions to generate the hash codes G^1 and G^2. For X-ray images, we modify CNN-F (Chatfield et al. 2014) to build the imgNet in consideration of network scale: to obtain c-bit hash codes, the last fully-connected layer of the original CNN-F is changed to a c-node fully-connected layer. For radiology reports, we first use the multi-scale network of (Li et al. 2018) to extract multi-scale features, and then a two-layer MLP with 4096 and c nodes transforms them into hash codes. The activation function of each final layer is tanh, which approximates the sign(·) function; all other layers use ReLU. To improve generalization, Local Response Normalization (LRN) (Krizhevsky, Sutskever, and Hinton 2012) is applied between the layers of all MLPs. Note that the use of CNN-F (Chatfield et al. 2014) and the multi-scale network (Li et al. 2018) is only for illustrative purposes; any other networks can be integrated into DSIBH as backbones for the imgNet and txtNet.
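A minimal forward pass of such a hash head might look as follows. The layer sizes here are toy stand-ins for the 4096-and-c MLP described above, and the weights are random rather than trained; the point is only that the ReLU/tanh stacking keeps the outputs in (−1, 1), ready for binarization:

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_hash_head(x, W1, b1, W2, b2):
    """Two-layer MLP hash head: ReLU hidden layer, tanh output approximating sign(.)."""
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer (4096 nodes in the paper)
    return np.tanh(h @ W2 + b2)        # tanh keeps outputs in (-1, 1)

d_in, d_hid, c = 16, 32, 8             # toy sizes; the paper uses 4096 and c
W1 = rng.normal(scale=0.1, size=(d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.1, size=(d_hid, c));    b2 = np.zeros(c)

g = mlp_hash_head(rng.normal(size=(5, d_in)), W1, b1, W2, b2)  # approximate codes
binary = np.where(g > 0, 1, -1)        # sign(.) applied when emitting hash codes
```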
As described before, semantic labels are encoded as hash codes G^y. To preserve semantic similarity, the loss function of the labNet is:

min_{G^y, θ^y} L^y = L^y_1 + ηL^y_2
                   = −Σ_{l,j}^N (S_lj Δ_lj − log(1 + e^{Δ_lj}))
                     + η Σ_{l=1}^N ‖g^y_l − f^y(θ^y; y_l)‖^2,
s.t. G^y = {g^y_l}_{l=1}^N ∈ {−1, 1}^c, (10)

where Δ_lj = f^y(θ^y; y_l)^T f^y(θ^y; y_j), f^y(θ^y; y_l) is the output of the labNet for y_l, g^y_l is the hash code obtained by applying sign(·) to f^y(θ^y; y_l), and η adjusts the weight of the loss terms.

The first term of Eq. (10) minimizes the negative log likelihood of the semantic similarity under the likelihood function:

p(S_lj | f^y(θ^y; y_l), f^y(θ^y; y_j)) = σ(Δ_lj) if S_lj = 1, and 1 − σ(Δ_lj) if S_lj = 0, (11)

where σ(Δ_lj) = 1 / (1 + e^{−Δ_lj}) is the sigmoid function. Meanwhile, the second term restricts the outputs of the labNet to be approximately binary, as required for hash codes.
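The first term of Eq. (10) can be checked numerically. The sketch below uses toy 4-dimensional outputs (not the paper's data) and `np.logaddexp` for a stable log(1 + e^Δ); when the supervision matches the code geometry the loss is small, and flipping the supervision inflates it:

```python
import numpy as np

def pairwise_nll(F, S):
    """Negative log-likelihood term of Eq. (10):
    Delta_lj = f_l . f_j,  L = -sum_(l,j) ( S_lj * Delta_lj - log(1 + exp(Delta_lj)) )."""
    Delta = F @ F.T
    return float(-np.sum(S * Delta - np.logaddexp(0.0, Delta)))  # stable log(1+e^Delta)

# Two well-separated outputs: each similar to itself, dissimilar to the other.
F = np.array([[ 1.0,  1.0,  1.0,  1.0],
              [-1.0, -1.0, -1.0, -1.0]])
S = np.array([[1, 0],
              [0, 1]])
loss_good = pairwise_nll(F, S)      # supervision matches the geometry: small loss
loss_bad = pairwise_nll(F, 1 - S)   # flipped supervision: much larger loss
```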
After the optimization of the labNet, the modality-invariant hash code G^y is obtained. The next step is to optimize the imgNet and txtNet to generate G^1 and G^2 following Eq. (9). DDIB interprets the first term in Eq. (9) as a cross-entropy loss (i.e., Eq. (4)). In our implementation, G^y is also used as the class-level weight in the cross-entropy loss, which makes G^1 and G^2 inherit the semantic similarity of the modality-invariant hash codes. Specifically, the non-redundant multi-label annotations are transformed into N_y-class annotations {ȳ_l}_{l=1}^{N_y}, and their corresponding hash codes are regarded as the class-level weights. The weighted cross-entropy loss is formulated as:

min_{θ^m} L^m_1 = −(1/N) Σ_i^N Σ_l^{N_y} ȳ_l log(a_il),
a_il = exp((ḡ^y_l)^T g^m_i) / Σ_{l'}^{N_y} exp((ḡ^y_{l'})^T g^m_i), (12)

where m indicates modality 1 or 2.
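Eq. (12) amounts to a softmax classifier whose weight rows are the class-level hash codes. A NumPy sketch with invented 2-bit codes (toy values, not the paper's configuration):

```python
import numpy as np

def weighted_ce(G_m, G_class, Ybar):
    """Eq.-(12)-style cross-entropy where class-level hash codes act as softmax weights.
    G_m: (N, c) modality outputs; G_class: (Ny, c) class codes; Ybar: (N, Ny) one-hot."""
    logits = G_m @ G_class.T                             # inner products (g_bar_l)^T g_i
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.mean(np.sum(Ybar * np.log(p), axis=1)))

G_class = np.array([[ 1.0,  1.0],
                    [-1.0, -1.0]])        # invented 2-bit class-level codes
G_m = np.array([[ 0.9,  0.8],
                [-0.7, -0.9]])            # outputs lying near their class codes
Ybar = np.array([[1, 0],
                 [0, 1]])
loss = weighted_ce(G_m, G_class, Ybar)    # small, since each output matches its class
```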
For the second term in Eq. (9), we adopt the differentiable matrix-based Rényi's α-order mutual information as the estimate:

min_{θ^m} L^m_2 = I(G^m; X^m). (13)

For the third term in Eq. (9), the ℓ2 loss is used directly:

min_{θ^m} L^m_3 = Σ_{i=1}^N ‖g^y_i − g^m_i‖^2. (14)

Merging Eqs. (12), (13), and (14), we obtain the loss function of the imgNet (or txtNet), formulated as the following minimization problem:

min_{θ^m} L^m = L^m_1 + βL^m_2 + γL^m_3, (15)

where β and γ are hyper-parameters that adjust the weights of the loss terms.
Optimization

The optimization of DSIBH includes two parts: learning the modality-invariant hash code G^y, and learning the hash codes G^1 and G^2 for X-ray images and radiology reports, respectively. Learning G^y amounts to optimizing θ^y; for the hash codes of modality m, θ^m needs to be optimized. The whole optimization procedure is summarized in Algorithm 1.

Since Eq. (10) is differentiable with respect to θ^y, back-propagation (BP) with mini-batch stochastic gradient descent (mini-batch SGD) is applied to update it. As for g^y_l, we use Eq. (16) for the update:

g^y_l = sign(f^y(θ^y; y_l)). (16)

For the imgNet and txtNet, BP with mini-batch SGD is likewise used to update θ^1 and θ^2.

Once Algorithm 1 converges, the well-trained imgNet and txtNet followed by sign(·) handle out-of-sample data points from modality m:

g^m_i = sign(f^m(θ^m; x^m_i)). (17)
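The out-of-sample step of Eq. (17), followed by Hamming ranking against a code database, can be sketched as below (invented 3-bit network outputs; `encode` maps non-positive values to −1, matching the sign(·) convention defined earlier):

```python
import numpy as np

def encode(f_out):
    """Out-of-sample extension of Eq. (17): binarize outputs, non-positive -> -1."""
    return np.where(f_out > 0, 1, -1)

def retrieve(query_code, db_codes):
    """Rank database items by ascending Hamming distance to the query."""
    dists = np.sum(query_code != db_codes, axis=1)
    return np.argsort(dists, kind="stable"), dists

db = encode(np.array([[ 0.3, -0.2, 0.9],
                      [-0.5, -0.1, 0.4],
                      [ 0.2,  0.7, 0.8]]))  # invented outputs for 3 database items
q = encode(np.array([0.6, -0.4, 0.1]))      # invented query output
order, dists = retrieve(q, db)              # item 0 matches the query exactly
```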
Experiments

In this section, we first introduce the dataset used for assessment and specify the experimental setting. We then demonstrate that the proposed DSIBH achieves state-of-the-art performance on CMR-based CAD.

Table 1: Comparison with baselines in terms of MAP on CMR-based CAD. The best results are marked in bold.

Method                        X→R: 16 / 32 / 64 / 128 bits        R→X: 16 / 32 / 64 / 128 bits
CCA (Hotelling 1992)          0.3468 / 0.3354 / 0.3273 / 0.3215   0.3483 / 0.3368 / 0.3288 / 0.3230
CMSSH (Bronstein et al. 2010) 0.4224 / 0.4020 / 0.3935 / 0.3896   0.3899 / 0.3967 / 0.3646 / 0.3643
SCM (Zhang and Li 2014)       0.4581 / 0.4648 / 0.4675 / 0.4684   0.4516 / 0.4574 / 0.4604 / 0.4611
STMH (Wang et al. 2015)       0.3623 / 0.3927 / 0.4211 / 0.4387   0.3980 / 0.4183 / 0.4392 / 0.4453
CMFH (Ding et al. 2016)       0.3649 / 0.3673 / 0.3736 / 0.3760   0.4130 / 0.4156 / 0.4303 / 0.4309
SePH (Lin et al. 2016)        0.4684 / 0.4776 / 0.4844 / 0.4903   0.4475 / 0.4555 / 0.4601 / 0.4658
DCMH (Jiang and Li 2017)      0.4834 / 0.4878 / 0.4885 / 0.4839   0.4366 / 0.4513 / 0.4561 / 0.4830
SSAH (Li et al. 2018)         0.4894 / 0.4999 / 0.4787 / 0.4624   0.4688 / 0.4806 / 0.4832 / 0.4833
EGDH (Shi et al. 2019)        0.4821 / 0.5010 / 0.4996 / 0.5096   0.4821 / 0.4943 / 0.4982 / 0.5041
DSIBH                         0.5001 / 0.5018 / 0.5116 / 0.5172   0.4898 / 0.4994 / 0.4997 / 0.5084
[Figure 1: panel (a) shows query X-ray images with the radiology reports retrieved via our DSIBH; panel (b) shows query radiology reports with the X-ray images retrieved via our DSIBH, each with its pathology labels.]
Figure 1: The top 4 profiles retrieved by our DSIBH on the MIMIC-CXR dataset with 128 bits.
Experimental setting

The large-scale chest X-ray and radiology report dataset MIMIC-CXR (Johnson et al. 2019) is used to evaluate the performance of DSIBH. Some statistics of this dataset follow.

MIMIC-CXR1 consists of chest X-ray images and radiology reports sourced from the Beth Israel Deaconess Medical Center between 2011 and 2016. Each radiology report is associated with at least one X-ray image and annotated with a 14-dimensional label indicating the existence or absence of pathology. To evaluate the performance of CMR-based CAD, we adopt 73,876 image-report pairs for assessment. During the comparison, radiology reports are represented as bag-of-words vectors over the 617 most frequent words. In the testing phase, we randomly sample 762 image-report pairs as the query set and regard the rest as the retrieval set. In the training phase, 14,000 pairs from the retrieval set are used as the training set.
The proposed DSIBH is compared with nine state-of-the-art hashing-based CMR methods: CCA (Hotelling 1992), CMSSH (Bronstein et al. 2010), SCM (Zhang and Li 2014), STMH (Wang et al. 2015), CMFH (Ding et al. 2016), SePH (Lin et al. 2016), DCMH (Jiang and Li 2017), SSAH (Li et al. 2018), and EGDH (Shi et al. 2019). CCA, STMH, and CMFH are unsupervised approaches that depend on data distributions, whereas the other six are supervised methods that take semantic labels into account. For fair comparison with the shallow-structure-based baselines, we use the training set of MIMIC-CXR to optimize a CNN-F network for classification and extract 4096-dimensional features to represent the X-ray images. We set the hyper-parameters to η = 1, β = 0.1, and γ = 1 for MIMIC-CXR. In the optimization phase, the batch size is set to 128 and three Adam solvers with different learning rates are applied (10^−3 for the labNet, 10^−4.5 for the imgNet, and 10^−3.5 for the txtNet).

1https://physionet.org/content/mimic-cxr/2.0.0/
Algorithm 1: The Optimization Procedure of DSIBH
Input: X-ray images X^1, radiology reports X^2, semantic labels Y, learning rates λ^y, λ^1, λ^2, and iteration numbers T^y, T^1, T^2.
Output: Parameters θ^1 and θ^2 of the imgNet and txtNet.
1: Randomly initialize θ^y, θ^1, θ^2, and G^y.
2: repeat
3:   for iter = 1 to T^y do
4:     Update θ^y by the BP algorithm: θ^y ← θ^y − λ^y · ∇_{θ^y} L^y
5:     Update G^y by Eq. (16)
6:   end for
7:   for iter = 1 to T^1 do
8:     Update θ^1 by the BP algorithm: θ^1 ← θ^1 − λ^1 · ∇_{θ^1} L^1
9:   end for
10:  for iter = 1 to T^2 do
11:    Update θ^2 by the BP algorithm: θ^2 ← θ^2 − λ^2 · ∇_{θ^2} L^2
12:  end for
13: until convergence

Mean average precision (MAP) is adopted to evaluate the performance of hashing-based CMR methods. MAP is the most widely used metric of retrieval accuracy and is computed as follows:

MAP = (1/|Q|) Σ_{i=1}^{|Q|} (1/r_{q_i}) Σ_{j=1}^R P_{q_i}(j) δ_{q_i}(j), (18)

where |Q| is the size of the query set, r_{q_i} is the number of instances correlated with query q_i in the database set, R is the retrieval radius, P_{q_i}(j) denotes the precision of the top j retrieved samples, and δ_{q_i}(j) indicates whether the j-th returned sample is correlated with the i-th query. To reflect the overall quality of the rankings, the size of the database set is used as the retrieval radius.
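Per query, Eq. (18) reduces to average precision over the ranked return list. A small self-check on hand-made relevance lists (with the whole list as the retrieval radius, as in the evaluation above):

```python
import numpy as np

def average_precision(relevance):
    """AP for one query: `relevance` is the binary relevance of the ranked returns."""
    relevance = np.asarray(relevance, dtype=float)
    if relevance.sum() == 0:
        return 0.0
    precision_at_j = np.cumsum(relevance) / (np.arange(len(relevance)) + 1)
    return float(np.sum(precision_at_j * relevance) / relevance.sum())

def mean_average_precision(rankings):
    """MAP over queries, matching Eq. (18) with R = database size."""
    return float(np.mean([average_precision(r) for r in rankings]))

# Query 1: relevant items at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2 = 5/6
# Query 2: relevant item at rank 2         -> AP = (1/2) / 1   = 1/2
map_score = mean_average_precision([[1, 0, 1], [0, 1, 0]])
```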
The efficacy of DSIBH in CMR-based CAD

CMR-based CAD involves two retrieval directions: using X-ray images to retrieve radiology reports (X→R) and using radiology reports to retrieve X-ray images (R→X). In the experiments, we set the bit length to 16, 32, 64, and 128 bits.

Table 1 reports the MAP results on the MIMIC-CXR dataset. As can be seen, unsupervised methods fail to provide reasonable retrieval results due to their neglect of semantic information. CCA performs the worst among the unsupervised methods due to its naive treatment of the data distribution. Compared with CCA, STMH and CMFH achieve better retrieval accuracy, which we attribute to their coverage of data correlations. By contrast, the shallow-structure-based supervised methods, including CMSSH, SCM, and SePH, achieve a large performance gain over the unsupervised methods by further exploiting semantic information to express semantic similarity with hash codes. Benefiting from nonlinear fitting and self-adjusting feature extraction, the deep supervised methods, including DCMH, SSAH, and EGDH, outperform the six shallow methods overall. Due to its extra reduction of superfluous information, our DSIBH achieves the best accuracy. Specifically, compared with the recently proposed deep hashing method EGDH, DSIBH achieves average absolute MAP increases of 0.96%/0.47% (X→R / R→X) on the MIMIC-CXR dataset.

Meanwhile, we visualize the top 4 medical profiles retrieved by DSIBH in the X→R and R→X directions on the MIMIC-CXR dataset in Figure 1. These results further confirm that DSIBH can retrieve pathology-related heterogeneous medical data.
Conclusion |
|
In this paper, to perform computer-aided diagnosis (CAD) based on large-scale multi-modal medical data, the cross-modal retrieval (CMR) technique based on semantic hashing is introduced. Inspired by the Deep Deterministic Information Bottleneck, a novel method named Deep Supervised Information Bottleneck Hashing (DSIBH) is designed to perform CMR-based CAD. Experiments are conducted on the large-scale medical dataset MIMIC-CXR. Compared with other state-of-the-art methods, our DSIBH reduces the distraction of superfluous information, which thus strengthens the discriminability of hash codes in CMR-based CAD.
|
Acknowledgements |
|
This work is partially supported by the NSFC (62101179, 61772220), the Key R&D Plan of Hubei Province (2020BAB027), and the Project of Hubei University School (202011903000002).
|
References |
|
Bronstein, M. M.; Bronstein, A. M.; Michel, F.; and Paragios, N. 2010. Data fusion through cross-modality metric learning using similarity-sensitive hashing. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 3594–3601. IEEE.
Cao, Y.; Long, M.; Wang, J.; and Liu, S. 2017. Collective deep quantization for efficient cross-modal retrieval. In Thirty-First AAAI Conference on Artificial Intelligence.
Cao, Y.; Long, M.; Wang, J.; and Zhu, H. 2016. Correlation autoencoder hashing for supervised cross-modal search. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, 197–204. ACM.
Chatfield, K.; Simonyan, K.; Vedaldi, A.; and Zisserman, A. 2014. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531.
de La Torre, J.; Valls, A.; and Puig, D. 2020. A deep learning interpretable classifier for diabetic retinopathy disease grading. Neurocomputing, 396: 465–476.
Ding, G.; Guo, Y.; Zhou, J.; and Gao, Y. 2016. Large-scale cross-modality search via collective matrix factorization hashing. IEEE Transactions on Image Processing, 25(11): 5427–5440.
Erfankhah, H.; Yazdi, M.; Babaie, M.; and Tizhoosh, H. R. 2019. Heterogeneity-aware local binary patterns for retrieval of histopathology images. IEEE Access, 7: 18354–18367.
Fang, J.; Fu, H.; and Liu, J. 2021. Deep triplet hashing network for case-based medical image retrieval. Medical Image Analysis, 69: 101981.
Hotelling, H. 1992. Relations between two sets of variates. In Breakthroughs in Statistics, 162–190. Springer.
Hu, H.; Xie, L.; Hong, R.; and Tian, Q. 2020. Creating Something From Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Hu, Z.; Liu, X.; Wang, X.; Cheung, Y.-m.; Wang, N.; and Chen, Y. 2019. Triplet Fusion Network Hashing for Unpaired Cross-Modal Retrieval. In Proceedings of the 2019 on International Conference on Multimedia Retrieval, 141–149.
Inés, A.; Domínguez, C.; Heras, J.; Mata, E.; and Pascual, V. 2021. Biomedical image classification made easier thanks to transfer and semi-supervised learning. Computer Methods and Programs in Biomedicine, 198: 105782.
Jiang, Q.-Y.; and Li, W.-J. 2017. Deep cross-modal hashing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3232–3240.
Johnson, A. E.; Pollard, T. J.; Greenbaum, N. R.; Lungren, M. P.; Deng, C.-y.; Peng, Y.; Lu, Z.; Mark, R. G.; Berkowitz, S. J.; and Horng, S. 2019. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 1097–1105.
Li, C.; Deng, C.; Li, N.; Liu, W.; Gao, X.; and Tao, D. 2018. Self-supervised adversarial hashing networks for cross-modal retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4242–4251.
Lin, Z.; Ding, G.; Han, J.; and Wang, J. 2016. Cross-view retrieval via probability-based semantics-preserving hashing. IEEE Transactions on Cybernetics, 47(12): 4342–4355.
Liu, S.; Qian, S.; Guan, Y.; Zhan, J.; and Ying, L. 2020. Joint-modal Distribution-based Similarity Hashing for Large-scale Unsupervised Deep Cross-modal Retrieval. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 1379–1388.
Shen, F.; Gao, X.; Liu, L.; Yang, Y.; and Shen, H. T. 2017. Deep asymmetric pairwise hashing. In Proceedings of the ACM International Conference on Multimedia, 1522–1530.
Shi, X.; Su, H.; Xing, F.; Liang, Y.; Qu, G.; and Yang, L. 2020. Graph temporal ensembling based semi-supervised convolutional neural network with noisy labels for histopathology image analysis. Medical Image Analysis, 60: 101624.
Shi, Y.; You, X.; Zheng, F.; Wang, S.; and Peng, Q. 2019. Equally-guided discriminative hashing for cross-modal retrieval. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, 4767–4773.
Song, G.; Tan, X.; Zhao, J.; and Yang, M. 2021. Deep Robust Multilevel Semantic Hashing for Multi-Label Cross-Modal Retrieval. Pattern Recognition, 108084.
Tishby, N.; Pereira, F. C.; and Bialek, W. 1999. The information bottleneck method. 368–377.
Wang, D.; Gao, X.; Wang, X.; and He, L. 2015. Semantic topic multimodal hashing for cross-media retrieval. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
Wang, J.; Zhang, T.; Sebe, N.; Shen, H. T.; et al. 2017. A survey on learning to hash. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4): 769–790.
Wang, K.; Yin, Q.; Wang, W.; Wu, S.; and Wang, L. 2016. A comprehensive survey on cross-modal retrieval. arXiv preprint arXiv:1607.06215.
Wang, L.; Zhu, L.; Yu, E.; Sun, J.; and Zhang, H. 2019. Fusion-supervised deep cross-modal hashing. In IEEE International Conference on Multimedia and Expo, 37–42. IEEE.
Xie, D.; Deng, C.; Li, C.; Liu, X.; and Tao, D. 2020. Multi-Task Consistency-Preserving Adversarial Hashing for Cross-Modal Retrieval. IEEE Transactions on Image Processing, 29: 3626–3637.
Yang, E.; Yao, D.; Cao, B.; Guan, H.; Yap, P.-T.; Shen, D.; and Liu, M. 2020. Deep disentangled hashing with momentum triplets for neuroimage search. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 191–201. Springer.
Yao, H.-L.; Zhan, Y.-W.; Chen, Z.-D.; Luo, X.; and Xu, X.-S. 2021. TEACH: Attention-Aware Deep Cross-Modal Hashing. In Proceedings of the 2021 International Conference on Multimedia Retrieval, ICMR '21, 376–384. New York, NY, USA: Association for Computing Machinery. ISBN 9781450384636.
Yu, J.; Zhou, H.; Zhan, Y.; and Tao, D. 2021. Deep Graph-neighbor Coherence Preserving Network for Unsupervised Cross-modal Hashing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4626–4634.
Yu, S.; Giraldo, L. G. S.; Jenssen, R.; and Principe, J. C. 2019. Multivariate Extension of Matrix-Based Rényi's α-Order Entropy Functional. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(11): 2960–2966.
Yu, X.; Yu, S.; and Príncipe, J. C. 2021. Deep Deterministic Information Bottleneck with Matrix-Based Entropy Functional. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3160–3164. IEEE.
Zhan, Y.-W.; Luo, X.; Wang, Y.; and Xu, X.-S. 2020. Supervised Hierarchical Deep Hashing for Cross-Modal Retrieval. In Proceedings of the 28th ACM International Conference on Multimedia, MM '20, 3386–3394. New York, NY, USA: Association for Computing Machinery. ISBN 9781450379885.
Zhang, D.; and Li, W.-J. 2014. Large-scale supervised multimodal hashing with semantic correlation maximization. In Twenty-Eighth AAAI Conference on Artificial Intelligence.
Zhang, J.; Xie, Y.; Xia, Y.; and Shen, C. 2019. Attention residual learning for skin lesion classification. IEEE Transactions on Medical Imaging, 38(9): 2092–2103.
Zhen, L.; Hu, P.; Wang, X.; and Peng, D. 2020. Deep Supervised Cross-Modal Retrieval. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Zhu, L.; Lu, X.; Cheng, Z.; Li, J.; and Zhang, H. 2020. Flexible multi-modal hashing for scalable multimedia retrieval. ACM Transactions on Intelligent Systems and Technology (TIST), 11(2): 1–20.