[
    [
        "2410.00001v1_figure_1.png",
        "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/AugementedRealityFlow.png",
        "Figure 1: iSurgARy general workflow. (From left to right): (1) The ‘acquire’ button sets a reference landmark and displays a red sphere to visualize its placement in AR. Above, we display all landmarks from a head top point of view (2) The system displays the 3D model of the ventricles in AR with the registration’s RMSE. The process of selecting landmarks more accurately can be repeated until the user is satisfied with the RMSE. (3) The entry point placement feature enables the user to choose the catheter entry point on the surface of the head. We display the top view of Kocher’s point, a common entry point for ventriculostomy. (4) The catheter tracking feature tracks a QR code image and renders the tip of the catheter in AR. The insertion of the catheter through the entry point from a head top point of view is shown.",
        "Chart"
    ],
    [
        "2410.00001v1_figure_2.png",
        "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/clinician.png",
        "Figure 2: Left: Neurosurgeon performing registration by selecting the left tragus landmark. Right: Visualization of internal ventricular structure in AR and simulation of catheter insertion.",
        "Chart"
    ],
    [
        "2410.00001v1_figure_3.png",
        "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/nurse.jpg",
        "Figure 3: Participant uses the aim cursor positioned at the center of the screen to select landmarks.",
        "Chart"
    ],
    [
        "2410.00001v1_figure_4.png",
        "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/nasa.png",
        "Figure 4: NASA TLX Scores for each dimension of workload.",
        "Chart"
    ],
    [
        "2410.00001v1_figure_5.png",
        "http://arxiv.org/html/2410.00001v1/extracted/5841610/Figures/IPhone_holder.png",
        "Figure 5: After feedback from clinicians we began using a mobile phone holder to improve ergonomic issues.",
        "Chart"
    ],
    [
        "2410.00003v2_figure_1.png",
        "http://arxiv.org/html/2410.00003v2/x1.png",
        "Figure 1. Semantic interpretations for HAR",
        "Chart"
    ],
    [
        "2410.00003v2_figure_2.png",
        "http://arxiv.org/html/2410.00003v2/x2.png",
        "Figure 2. Example of semantic interpretations of sensor readings and activity labels",
        "Chart"
    ],
    [
        "2410.00003v2_figure_3(a).png",
        "http://arxiv.org/html/2410.00003v2/x3.png",
        "(a) Raw data\nFigure 3. Data distribution under three settings across two datasets",
        "Chart"
    ],
    [
        "2410.00003v2_figure_3(b).png",
        "http://arxiv.org/html/2410.00003v2/x4.png",
        "(b) Self-supervised learning\nFigure 3. Data distribution under three settings across two datasets",
        "Chart"
    ],
    [
        "2410.00003v2_figure_3(c).png",
        "http://arxiv.org/html/2410.00003v2/x5.png",
        "(c) Semantic interpretations\nFigure 3. Data distribution under three settings across two datasets",
        "Chart"
    ],
    [
        "2410.00003v2_figure_4.png",
        "http://arxiv.org/html/2410.00003v2/x6.png",
        "Figure 4. Semantic interpretations for new\nactivities",
        "Chart"
    ],
    [
        "2410.00003v2_figure_5.png",
        "http://arxiv.org/html/2410.00003v2/x7.png",
        "Figure 5. Overview of LanHAR. (1) Utilize LLMs for generating semantic interpretations of sensor reading and activity labels. (2) Train a text encoder to encode two types of semantic interpretations and achieve their alignment. H⁢i𝐻𝑖H{i}italic_H italic_i and Zisubscript𝑍𝑖Z_{i}italic_Z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT denote embeddings of semantic interpretations of activity labels and sensor reading (3) Train a sensor encoder to align sensor reading and semantic interpretation. E⁢i𝐸𝑖E{i}italic_E italic_i denote embeddings of sensor reading (4) For inference, only use the sensor encoder to generate embeddings for the sensor readings E⁢i𝐸𝑖E{i}italic_E italic_i and then compute the similarity with the pre-stored embeddings of the activity labels H⁢i𝐻𝑖H{i}italic_H italic_i to obtain the human activity recognition results.",
        "Chart"
    ],
    [
        "2410.00003v2_figure_6.png",
        "http://arxiv.org/html/2410.00003v2/x8.png",
        "Figure 6. An example prompt for obtaining semantic interpretation of sensor readings",
        "Chart"
    ],
    [
        "2410.00003v2_figure_7.png",
        "http://arxiv.org/html/2410.00003v2/x9.png",
        "Figure 7. An example prompt for obtaining semantic interpretation of activity labels",
        "Chart"
    ],
    [
        "2410.00003v2_figure_8.png",
        "http://arxiv.org/html/2410.00003v2/x10.png",
        "Figure 8. The iterative re-generatio method to ensuring the quality of LLM responses. (1) Filter inaccurate semantic interpretations. (2) Regenerate new semantic interpretations by LLMs. (3) Incorporate the new semantic interpretation based on whether it can reduce the overall KL divergence.",
        "Chart"
    ],
    [
        "2410.00003v2_figure_9.png",
        "http://arxiv.org/html/2410.00003v2/x11.png",
        "Figure 9. Category-level HAR Performance\n",
        "Chart"
    ],
    [
        "2410.00003v2_figure_10.png",
        "http://arxiv.org/html/2410.00003v2/x12.png",
        "Figure 10. The Effect of Text Encoder\n",
        "Chart"
    ],
    [
        "2410.00003v2_figure_11.png",
        "http://arxiv.org/html/2410.00003v2/x13.png",
        "Figure 11. The impact of different pre-trained LM\n",
        "Chart"
    ],
    [
        "2410.00003v2_figure_12.png",
        "http://arxiv.org/html/2410.00003v2/x14.png",
        "Figure 12. The impact of different LLMs\n",
        "Chart"
    ],
    [
        "2410.00003v2_figure_13.png",
        "http://arxiv.org/html/2410.00003v2/x15.png",
        "Figure 13. KL divergence with and without the iterative re-generation method\n",
        "Chart"
    ],
    [
        "2410.00003v2_figure_14.png",
        "http://arxiv.org/html/2410.00003v2/x16.png",
        "Figure 14. The impact of different quality of LLMs’ response\n",
        "Chart"
    ],
    [
        "2410.00003v2_figure_15.png",
        "http://arxiv.org/html/2410.00003v2/x17.png",
        "Figure 15. The effect of different number of attention heads\n",
        "Chart"
    ],
    [
        "2410.00003v2_figure_16.png",
        "http://arxiv.org/html/2410.00003v2/x18.png",
        "Figure 16. The effect of different number of encoding layers\n",
        "Chart"
    ],
    [
        "2410.00005v1_figure_1.png",
        "http://arxiv.org/html/2410.00005v1/x1.png",
        "Figure 1. Illustration of Task 1 framework.",
        "Chart"
    ],
    [
        "2410.00005v1_figure_2.png",
        "http://arxiv.org/html/2410.00005v1/x2.png",
        "Figure 2. Illustration of Task #2 and #3 framework.",
        "Chart"
    ],
    [
        "2410.00005v1_figure_3.png",
        "http://arxiv.org/html/2410.00005v1/x3.png",
        "Figure 3. Detail Evaluation of Team db3 Solution",
        "Chart"
    ],
    [
        "2410.00006v1_figure_1.png",
        "http://arxiv.org/html/2410.00006v1/extracted/5851436/image1.png",
        "Figure 1. Typical architecture of a conversational interface, adapted from\n(McTear, 2018; Rough and Cowan, 2020)",
        "Chart"
    ],
    [
        "2410.00006v1_figure_2.png",
        "http://arxiv.org/html/2410.00006v1/extracted/5851436/image2.png",
        "Figure 2. Architecture of a conversational user interface based on Rasa",
        "Chart"
    ],
    [
        "2410.00006v1_figure_3.png",
        "http://arxiv.org/html/2410.00006v1/extracted/5851436/image3.png",
        "Figure 3. Sample dialog, using the chatroom of (scalableminds, 2020)",
        "Chart"
    ],
    [
        "2410.00006v1_figure_4.png",
        "http://arxiv.org/html/2410.00006v1/extracted/5851436/image4.png",
        "Figure 4. Node-RED Flow editor with an example flow implementing an action server, i.e., fulfillment component,\nfor a Rasa conversational user interface.\nTo the left, the network section of the Node-RED palette is partially visible.",
        "Chart"
    ],
    [
        "2410.00006v1_figure_5.png",
        "http://arxiv.org/html/2410.00006v1/extracted/5851436/image5.png",
        "Figure 5. Example configuration of a switch node in Node-RED",
        "Chart"
    ],
    [
        "2410.00006v1_figure_6.png",
        "http://arxiv.org/html/2410.00006v1/extracted/5851436/image6.png",
        "Figure 6. Example configuration of a sendbuttons node in Node-RED",
        "Chart"
    ],
    [
        "2410.00010v1_figure_1.png",
        "http://arxiv.org/html/2410.00010v1/x1.png",
        "Figure 1: PHemoNet. Fully hypercomplex architecture with encoders operating in different hypercomplex domains according to n𝑛nitalic_n and a refined hypercomplex fusion module with nfusion=4subscript𝑛fusion4n_{\\text{fusion}}=4italic_n start_POSTSUBSCRIPT fusion end_POSTSUBSCRIPT = 4. Finally, a classification layer yileds the final output for valence/arousal prefiction.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_1.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/BlockDiagram.png",
        "Figure 1: Block diagram of the actor-critic based EEG diffusion system.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_2.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Forward.png",
        "Figure 2: The forward diffusion process of EEG diffusion model.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_3.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Reverse.png",
        "Figure 3: The backward diffusion process of EEG diffusion model.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_4.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/U_Net.png",
        "Figure 4: The architecture of EEG-U-Net.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_5.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/StepBlock.png",
        "Figure 5: The step blocks of EEG-U-Net.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_6(a).png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ConBlock.png",
        "(a)\nFigure 6: The architecture of important network blocks in EEG-U-Net. (a) Convolution block. (b) Transposed convolution block. (c) Residual block.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_6(b).png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/TransCon.png",
        "(b)\nFigure 6: The architecture of important network blocks in EEG-U-Net. (a) Convolution block. (b) Transposed convolution block. (c) Residual block.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_6(c).png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ResBlock.png",
        "(c)\nFigure 6: The architecture of important network blocks in EEG-U-Net. (a) Convolution block. (b) Transposed convolution block. (c) Residual block.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_7(a).png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/CWTNet.png",
        "(a)\nFigure 7: The architecture of continuous wavelet transform based convolutional networks and classification based convolutional networks. (a) Wavelet network. (b) Classification network.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_7(b).png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ClassNet.png",
        "(b)\nFigure 7: The architecture of continuous wavelet transform based convolutional networks and classification based convolutional networks. (a) Wavelet network. (b) Classification network.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_8(a).png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/ActorNet.png",
        "(a)\nFigure 8: The architecture of two networks using in weight-guided agent. (a) Actor network. (b) Critic network.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_8(b).png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/CriticNet.png",
        "(b)\nFigure 8: The architecture of two networks using in weight-guided agent. (a) Actor network. (b) Critic network.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_9.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/reward.png",
        "Figure 9: The design process of a reward mechanism for the EEG generation system.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_10.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Channel.png",
        "Figure 10: Electrode montage for In-house EEG Datase.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_11.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Device.png",
        "Figure 11: Equipments utilized for data collection.",
        "Chart"
    ],
    [
        "2410.00013v1_figure_12.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/Experiment.png",
        "Figure 12: Timing scheme of one session (top), Timing scheme of the paradigm (bottom).",
        "Chart"
    ],
    [
        "2410.00013v1_figure_13.png",
        "http://arxiv.org/html/2410.00013v1/extracted/5854650/Images/RealEEG.png",
        "Figure 13: Real EEG signals drawn from BCI competition 4-2a dataset.",
        "Chart"
    ]
]