<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
<meta name="description" content="Home page of REMEX">
<meta name="author" content="WeiQM">
<link rel="icon" href="images/logo/RMX_16.ico">
<title>REMEX - Remote sensing + Medical imaging + X-features</title>
<!-- Bootstrap core CSS -->
<link rel="stylesheet" href="style/bootstrap.min.css">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css">
<!-- Custom styles for this template -->
<link href="style/jquery.bxslider.css" rel="stylesheet">
<link href="style/style.css" rel="stylesheet">
</head>
<body>
<!-- Navigation -->
<nav class="navbar navbar-inverse navbar-fixed-top">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
</div>
<div id="navbar" class="collapse navbar-collapse">
<ul class="nav navbar-nav">
<li class="active"><a href="index.html">Home</a></li>
<li><a href="people.html">People</a></li>
<li><a href="research.html">Research</a></li>
<li><a href="publications.html">Publications</a></li>
<li><a href="downloads.html">Downloads</a></li>
<li><a href="contact.html">Contact Us</a></li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li class="active"><a href="index.html">English</a></li>
<li><a href="html/cn/index.html">中文</a></li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li><a href="index.html"><img src="images/logo/logo_w.png" alt="Logo" width="80"/></a></li>
</ul>
</div>
</div>
</nav>
<div class="container">
<header>
<!--
<a href="index.html"><img src="images/logo.png" width="256px"></a>
-->
</header>
<!--
<section class="main-slider">
<ul class="bxslider">
<li><div class="slider-item"><img src="images/logo/logo_c.png" title="Logo" /><h2><a href="" title="Loge">New published !</a></h2></div></li>
<li><div class="slider-item"><img src="images/logo/logo_m.png" title="Logo" /><h2><a href="" title="Loge">New published !</a></h2></div></li>
<li><div class="slider-item"><img src="images/logo/logo_y.png" title="Logo" /><h2><a href="" title="Loge">New published !</a></h2></div></li>
<li><div class="slider-item"><img src="images/logo/logo_k.png" title="Logo" /><h3><a href="" title="Loge">New published !</a></h3></div></li>
<li><div class="slider-item"><img src="images/logo/logo_r.png" title="Logo" /><h3><a href="" title="Loge">New published !</a></h3></div></li>
<li><div class="slider-item"><img src="images/logo/logo_g.png" title="Logo" /><h3><a href="" title="Loge">New published !</a></h3></div></li>
<li><div class="slider-item"><img src="images/logo/logo_b.png" title="Logo" /><h3><a href="" title="Loge">New published !</a></h3></div></li>
</ul>
</section>
-->
<section>
<div class="row">
<!-- Main Page -->
<div class="col-md-8">
<article class="content-block">
<div class="block-body">
<img src="images/logo.png" alt="Logo" width="512">
<p><br>
<b>REMEX</b> (<b>Re</b>mote sensing and <b>Me</b>dical imaging with <b>X</b>-features) is a research group directed by Prof. Zhiguo Jiang. Its main research interests include image processing, computer vision, pattern recognition, and deep learning, together with their applications in remote sensing and medical imaging.
</p>
<div class="block-image">
<img src="images/photo/Team2024.jpg" alt="Team photo">
</div>
<hr/>
<h3 align="center"> <font color="#FF0000">【New】</font><a href="html/cn/index.html">REMEX Lab Summer Camp 2024</a></h3>
<hr/>
<h3 align="left">Recently Published</h3><br />
<div class="block-text">
<table><tbody>
<tr><td> <!-- A Paper -->
<p>
<b>Addressing Sample Inconsistency for Semisupervised Object Detection in Remote Sensing Images</b> <a href="https://ieeexplore.ieee.org/document/10463140" target="_blank"><i class="fa fa-external-link"></i></a>
<br>
<font size="3" face="Georgia"><i>
Yuhao Wang, Lifan Yao, Gang Meng, Xinyue Zhang, Jiayun Song, <a href="https://haopzhang.github.io/" target="_blank">Haopeng Zhang*</a>
</i></font>
<br>
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (JSTARS), 2024
<br>
<!-- <i class="fa fa-file-pdf-o"></i> <a href="https://zhengyushan.github.io" target="_blank">PDF</a> -->
<i class="fa fa-bookmark-o"></i> <a href="javascript:toggleblock('ZhangJSTARS2024Abs')">Abstract</a>
<i class="fa fa-quote-left"></i> <a href="javascript:toggleblock('ZhangJSTARS2024Bib')">BibTeX</a>
<!-- <i class="fa fa-github"></i> <a href="https://github.com/hudingyi/FGCR" target="_blank">Code</a>-->
</p>
<p id="ZhangJSTARS2024Abs" class="abstract" style="display: none;">
The emergence of semisupervised object detection (SSOD) techniques has greatly enhanced object detection performance. SSOD leverages a limited amount of labeled data along with a large quantity of unlabeled data. However, there exists a problem of sample inconsistency in remote sensing images, which manifests in two ways. First, remote sensing images are diverse and complex. Conventional random initialization methods for labeled data are insufficient for training teacher networks to generate high-quality pseudolabels. Second, remote sensing images typically exhibit a long-tailed distribution, where some categories have a significant number of instances, while others have very few. This distribution poses significant challenges during model training. In this article, we propose the utilization of SSOD networks for remote sensing images characterized by a long-tailed distribution. To address the issue of sample inconsistency between labeled and unlabeled data, we employ a labeled data iterative selection strategy based on the active learning approach. We iteratively filter out high-value samples through the designed selection criteria. The selected samples are labeled and used as data for supervised training. This method filters out valuable labeled data, thereby improving the quality of pseudolabels. Inspired by transfer learning, we decouple the model training into the training of the backbone and the detector. We tackle the problem of sample inconsistency in long-tail distribution data by training the detector using balanced data across categories. Our approach exhibits an approximate 1% improvement over the current state-of-the-art models on both the DOTAv1.0 and DIOR datasets.
</p>
<pre id="ZhangJSTARS2024Bib" class="bibtex" style="display: none;">
@ARTICLE{10463140,
author={Wang, Yuhao and Yao, Lifan and Meng, Gang and Zhang, Xinyue and Song, Jiayun and Zhang, Haopeng},
journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
title={Addressing Sample Inconsistency for Semisupervised Object Detection in Remote Sensing Images},
year={2024},
volume={17},
number={},
pages={6933-6944},
keywords={Training;Remote sensing;Object detection;Measurement;Detectors;Tail;Labeling;Active learning;long-tailed distribution;remote sensing;semisupervised object detection (SSOD)},
doi={10.1109/JSTARS.2024.3374820}
}
</pre>
<script type="text/javascript">
// These inline calls run before scripts/togglehide.js is loaded at the end of
// the page, so guard them; the blocks are already hidden via inline styles.
if (typeof hideblock === 'function') {
hideblock('ZhangJSTARS2024Abs');
hideblock('ZhangJSTARS2024Bib');
}
</script>
</td>
</tr> <!-- Paper End Here -->
<tr><td> <!-- A Paper -->
<p>
<b>Histopathology language-image representation learning for fine-grained digital pathology cross-modal retrieval</b> <a href="https://www.sciencedirect.com/science/article/pii/S1361841524000884" target="_blank"><i class="fa fa-external-link"></i></a>
<br>
<font size="3" face="Georgia"><i>
Dingyi Hu, Zhiguo Jiang, Jun Shi, Fengying Xie, Kun Wu, Kunming Tang, Ming Cao, Jianguo Huai and <a href="https://zhengyushan.github.io/" target="_blank">Yushan Zheng*</a>
</i></font>
<br>
Medical Image Analysis, 2024
<br>
<!-- <i class="fa fa-file-pdf-o"></i> <a href="https://zhengyushan.github.io" target="_blank">PDF</a> -->
<i class="fa fa-bookmark-o"></i> <a href="javascript:toggleblock('ZhengMIA2024Abs')">Abstract</a>
<i class="fa fa-quote-left"></i> <a href="javascript:toggleblock('ZhengMIA2024Bib')">BibTeX</a>
<i class="fa fa-github"></i> <a href="https://github.com/hudingyi/FGCR" target="_blank">Code</a>
</p>
<p id="ZhengMIA2024Abs" class="abstract" style="display: none;">
Analysis of large-scale digital whole slide image (WSI) datasets has gained significant attention in computer-aided cancer diagnosis. Content-based histopathological image retrieval (CBHIR) is a technique that searches a large database for data samples matching input objects in both details and semantics, offering relevant diagnostic information to pathologists. However, the current methods are limited by the difficulty of gigapixels, the variable size of WSIs, and the dependence on manual annotations. In this work, we propose a novel histopathology language-image representation learning framework for fine-grained digital pathology cross-modal retrieval, which utilizes paired diagnosis reports to learn fine-grained semantics from the WSI. An anchor-based WSI encoder is built to extract hierarchical region features and a prompt-based text encoder is introduced to learn fine-grained semantics from the diagnosis reports. The proposed framework is trained with a multivariate cross-modal loss function to learn semantic information from the diagnosis report at both the instance level and region level. After training, it can perform four types of retrieval tasks based on the multi-modal database to support diagnostic requirements. We conducted experiments on an in-house dataset and a public dataset to evaluate the proposed method. Extensive experiments have demonstrated the effectiveness of the proposed method and its advantages over the present histopathology retrieval methods. The code is available at https://github.com/hudingyi/FGCR.
</p>
<pre id="ZhengMIA2024Bib" class="bibtex" style="display: none;">
@article{HU2024103163,
title = {Histopathology language-image representation learning for fine-grained digital pathology cross-modal retrieval},
journal = {Medical Image Analysis},
volume = {95},
pages = {103163},
year = {2024},
issn = {1361-8415},
doi = {https://doi.org/10.1016/j.media.2024.103163},
url = {https://www.sciencedirect.com/science/article/pii/S1361841524000884},
author = {Dingyi Hu and Zhiguo Jiang and Jun Shi and Fengying Xie and Kun Wu and Kunming Tang and Ming Cao and Jianguo Huai and Yushan Zheng},
keywords = {CBHIR, Cross-modal, Diagnosis reports, Digital pathology},
abstract = {Large-scale digital whole slide image (WSI) datasets analysis have gained significant attention in computer-aided cancer diagnosis. Content-based histopathological image retrieval (CBHIR) is a technique that searches a large database for data samples matching input objects in both details and semantics, offering relevant diagnostic information to pathologists. However, the current methods are limited by the difficulty of gigapixels, the variable size of WSIs, and the dependence on manual annotations. In this work, we propose a novel histopathology language-image representation learning framework for fine-grained digital pathology cross-modal retrieval, which utilizes paired diagnosis reports to learn fine-grained semantics from the WSI. An anchor-based WSI encoder is built to extract hierarchical region features and a prompt-based text encoder is introduced to learn fine-grained semantics from the diagnosis reports. The proposed framework is trained with a multivariate cross-modal loss function to learn semantic information from the diagnosis report at both the instance level and region level. After training, it can perform four types of retrieval tasks based on the multi-modal database to support diagnostic requirements. We conducted experiments on an in-house dataset and a public dataset to evaluate the proposed method. Extensive experiments have demonstrated the effectiveness of the proposed method and its advantages to the present histopathology retrieval methods. The code is available at https://github.com/hudingyi/FGCR.}
}
</pre>
<script type="text/javascript">
if (typeof hideblock === 'function') {
hideblock('ZhengMIA2024Abs');
hideblock('ZhengMIA2024Bib');
}
</script>
</td>
</tr> <!-- Paper End Here -->
<tr><td> <!-- A Paper -->
<p>
<b>Satellite Video Super-Resolution via Unidirectional Recurrent Network and Various Degradation Modeling</b> <a href="" target="_blank"><i class="fa fa-external-link"></i></a>
<br>
<font size="3" face="Georgia"><i>
Xiaoyuan Wei, <a href="https://haopzhang.github.io/" target="_blank">Haopeng Zhang*</a>, Zhiguo Jiang
</i></font>
<br>
IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2024
<br>
<!-- <i class="fa fa-file-pdf-o"></i> <a href="https://zhengyushan.github.io" target="_blank">PDF</a> -->
<i class="fa fa-bookmark-o"></i> <a href="javascript:toggleblock('ZhangIGARSS2024Abs')">Abstract</a>
<i class="fa fa-quote-left"></i> <a href="javascript:toggleblock('ZhangIGARSS2024Bib')">BibTeX</a>
<!-- <i class="fa fa-github"></i> <a href="" target="_blank">Code</a> -->
</p>
<p id="ZhangIGARSS2024Abs" class="abstract" style="display: none;">
Satellite video images contain temporal contextual information that is unavailable in single-frame images. Therefore, using a sequence of frames for super-resolution can significantly enhance the reconstruction effect. However, most existing satellite Video Super-Resolution (VSR) methods focus on improving the network’s representation ability, overlooking the complex degradation processes present in real-world satellite videos, which appear as a blind SR problem. In this paper, we propose an effective satellite VSR method based on a unidirectional recurrent network named URD-VSR. Simultaneously, a network independent of the SR structure is utilized to model the degradation process. Experiments on real satellite video datasets and integration with object detection demonstrate the effectiveness of the proposed method.
</p>
<pre id="ZhangIGARSS2024Bib" class="bibtex" style="display: none;">
Coming Soon
</pre>
<script type="text/javascript">
if (typeof hideblock === 'function') {
hideblock('ZhangIGARSS2024Abs');
hideblock('ZhangIGARSS2024Bib');
}
</script>
</td>
</tr> <!-- Paper End Here -->
<tr><td> <!-- A Paper -->
<p>
<b>A Closed-Loop Network for Single Infrared Remote Sensing Image Super-Resolution in Real World</b> <a href="https://www.mdpi.com/2072-4292/15/4/882" target="_blank"><i class="fa fa-external-link"></i></a>
<br>
<font size="3" face="Georgia"><i>
<a href="https://haopzhang.github.io/" target="_blank">Haopeng Zhang*</a>, Cong Zhang, Fengying Xie, Zhiguo Jiang
</i></font>
<br>
Remote Sensing, 2023
<br>
<!-- <i class="fa fa-file-pdf-o"></i> <a href="" target="_blank">PDF</a> -->
<i class="fa fa-bookmark-o"></i> <a href="javascript:toggleblock('ZhangRS2023Abs')">Abstract</a>
<i class="fa fa-quote-left"></i> <a href="javascript:toggleblock('ZhangRS2023Bib')">BibTeX</a>
<!-- <i class="fa fa-github"></i> <a href="" target="_blank">Code</a> -->
</p>
<p id="ZhangRS2023Abs" class="abstract" style="display: none;">
Single image super-resolution (SISR) is to reconstruct a high-resolution (HR) image from a corresponding low-resolution (LR) input. It is an effective way to solve the problem that infrared remote sensing images are usually suffering low resolution due to hardware limitations. Most previous learning-based SISR methods just use synthetic HR-LR image pairs (obtained by bicubic kernels) to learn the mapping from LR images to HR images. However, the underlying degradation in the real world is often different from the synthetic method, i.e., the real LR images are obtained through a more complex degradation kernel, which leads to the adaptation problem and poor SR performance. To handle this problem, we propose a novel closed-loop framework that can not only make full use of the learning ability of the channel attention module but also introduce the information of real images as much as possible through a closed-loop structure. Our network includes two independent generative networks for down-sampling and super-resolution, respectively, and they are connected to each other to get more information from real images. We make a comprehensive analysis of the training data, resolution level and imaging spectrum to validate the performance of our network for infrared remote sensing image super-resolution. Experiments on real infrared remote sensing images show that our method achieves superior performance in various training strategies of supervised learning, weakly supervised learning and unsupervised learning. Especially, our peak signal-to-noise ratio (PSNR) is 0.9 dB better than the second-best unsupervised super-resolution model on PROBA-V dataset.
</p>
<pre id="ZhangRS2023Bib" class="bibtex" style="display: none;">
@Article{rs15040882,
AUTHOR = {Zhang, Haopeng and Zhang, Cong and Xie, Fengying and Jiang, Zhiguo},
TITLE = {A Closed-Loop Network for Single Infrared Remote Sensing Image Super-Resolution in Real World},
JOURNAL = {Remote Sensing},
VOLUME = {15},
YEAR = {2023},
NUMBER = {4},
ARTICLE-NUMBER = {882},
URL = {https://www.mdpi.com/2072-4292/15/4/882},
ISSN = {2072-4292},
DOI = {10.3390/rs15040882}
}
</pre>
<script type="text/javascript">
if (typeof hideblock === 'function') {
hideblock('ZhangRS2023Abs');
hideblock('ZhangRS2023Bib');
}
</script>
</td>
</tr> <!-- Paper End Here -->
<tr><td> <!-- A Paper -->
<p>
<b>Kernel Attention Transformer for Histopathology Whole Slide Image Analysis and Assistant Cancer Diagnosis</b> <a href="https://ieeexplore.ieee.org/document/10093771" target="_blank"><i class="fa fa-external-link"></i></a>
<br>
<font size="3" face="Georgia"><i>
Yushan Zheng, Jun Li, Jun Shi, Fengying Xie, Jianguo Huai, Ming Cao and Zhiguo Jiang*
</i></font>
<br>
IEEE Transactions on Medical Imaging (TMI), 2023
<br>
<!-- <i class="fa fa-file-pdf-o"></i> <a href="https://openaccess.thecvf.com/content/CVPR2022W/PBVS/papers/Li_A_Two-Stage_Shake-Shake_Network_for_Long-Tailed_Recognition_of_SAR_Aerial_CVPRW_2022_paper.pdf" target="_blank">PDF</a>-->
<i class="fa fa-bookmark-o"></i> <a href="javascript:toggleblock('ZhengTMI2023Abs')">Abstract</a>
<i class="fa fa-quote-left"></i> <a href="javascript:toggleblock('ZhengTMI2023Bib')">BibTeX</a>
<!-- <i class="fa fa-github"></i> <a href="https://github.com/LinpengPan/PBVS2022-Multi-modal-AVOC-Challenge-Track1" target="_blank">Code</a>-->
</p>
<p id="ZhengTMI2023Abs" class="abstract" style="display: none;">
Transformer has been widely used in histopathology whole slide image analysis. However, the design of token-wise self-attention and positional embedding strategy in the common Transformer limits its effectiveness and efficiency when applied to gigapixel histopathology images. In this paper, we propose a novel kernel attention Transformer (KAT) for histopathology WSI analysis and assistant cancer diagnosis. The information transmission in KAT is achieved by cross-attention between the patch features and a set of kernels related to the spatial relationship of the patches on the whole slide images. Compared to the common Transformer structure, KAT can extract the hierarchical context information of the local regions of the WSI and provide diversified diagnosis information. Meanwhile, the kernel-based cross-attention paradigm significantly reduces the computational amount. The proposed method was evaluated on three large-scale datasets and was compared with 8 state-of-the-art methods. The experimental results have demonstrated that the proposed KAT is effective and efficient in the task of histopathology WSI analysis and is superior to the state-of-the-art methods.
</p>
<pre id="ZhengTMI2023Bib" class="bibtex" style="display: none;">
@ARTICLE{10093771,
author={Zheng, Yushan and Li, Jun and Shi, Jun and Xie, Fengying and Huai, Jianguo and Cao, Ming and Jiang, Zhiguo},
journal={IEEE Transactions on Medical Imaging},
title={Kernel Attention Transformer for Histopathology Whole Slide Image Analysis and Assistant Cancer Diagnosis},
year={2023},
volume={42},
number={9},
pages={2726-2739},
keywords={Transformers;Histopathology;Feature extraction;Kernel;Cancer;Task analysis;Training;WSI;transformer;cross-attention;gastric cancer;endometrial cancer},
doi={10.1109/TMI.2023.3264781}}
</pre>
<script type="text/javascript">
if (typeof hideblock === 'function') {
hideblock('ZhengTMI2023Abs');
hideblock('ZhengTMI2023Bib');
}
</script>
</td>
</tr> <!-- Paper End Here -->
</tbody></table>
</div>
<div class="get-more" align="right"><a href="publications.html"> More </a></div>
</div>
</article>
</div>
<!-- Slid Page -->
<div class="col-md-4 sidebar-gutter">
<aside>
<!-- sidebar-widget -->
<div class="sidebar-widget">
<div class="widget-container widget-main">
<img src="images/photo/JiangZG2.jpg" alt="JiangZG's photo">
<h4>Zhiguo Jiang</h4>
<div class="author-title">Professor</div>
<p>
<b>Address:</b> 9 South-3rd Street, Shahe University Park, Changping District, Beijing, 102206, China<br>
<b>E-mail:</b> <a href="mailto:[email protected]">[email protected]</a><br>
<!--
<b>Tel:</b> TBA<br>
<b>Fax:</b> TBA<br>
-->
<b>Office:</b> D721, Main Building<br>
</p>
</div>
</div>
<!-- sidebar-widget -->
<div class="sidebar-widget">
<h3 class="sidebar-title">Researchers</h3>
<div class="widget-container">
<article class="widget-block">
<div class="block-image"> <img src="images/photo/ZhangHP.jpg" alt="ZhangHP's photo"> </div>
<div class="block-body">
<h2><a href="https://haopzhang.github.io/" target="_blank">Haopeng Zhang <i class="fa fa-external-link"></i></a></h2>
<div class="icon-meta">
<span><i class="fa fa-graduation-cap"></i>Associate Professor</span> <span><i class="fa fa-clock-o"></i> </span>
<br><span><i class="fa fa-envelope-o"></i> <a href="mailto:[email protected]">[email protected]</a></span>
</div>
</div>
</article>
<article class="widget-block">
<div class="block-image"> <img src="images/photo/XieFY.jpg" alt="XieFY's photo"> </div>
<div class="block-body">
<h2><a href="http://www.sa.buaa.edu.cn/info/1014/4773.htm" target="_blank">Fengying Xie <i class="fa fa-external-link"></i></a></h2>
<div class="icon-meta">
<span><i class="fa fa-graduation-cap"></i>Professor</span> <span><i class="fa fa-clock-o"></i> </span>
<br><span><i class="fa fa-envelope-o"></i> <a href="mailto:[email protected]">[email protected]</a></span>
</div>
</div>
</article>
<article class="widget-block">
<div class="block-image"> <img src="images/photo/ZhaoDP.jpg" alt="ZhaoDP's photo"> </div>
<div class="block-body">
<h2><a href="https://shi.buaa.edu.cn/zhaodanpei/zh_CN/index.htm" target="_blank">Danpei Zhao <i class="fa fa-external-link"></i></a></h2>
<div class="icon-meta">
<span><i class="fa fa-graduation-cap"></i>Associate Professor</span> <span><i class="fa fa-clock-o"></i> </span>
<br><span><i class="fa fa-envelope-o"></i> <a href="mailto:[email protected]">[email protected]</a></span>
</div>
</div>
</article>
<article class="widget-block">
<div class="block-image"> <img src="images/photo/ZhengYS.jpg" alt="ZhengYS's photo"> </div>
<div class="block-body">
<h2><a href="https://zhengyushan.github.io" target="_blank">Yushan Zheng <i class="fa fa-external-link"></i></a></h2>
<div class="icon-meta">
<span><i class="fa fa-graduation-cap"></i>Associate Professor</span> <span><i class="fa fa-clock-o"></i> </span>
<br><span><i class="fa fa-envelope-o"></i> <a href="mailto:[email protected]">[email protected]</a></span>
</div>
</div>
</article>
</div>
</div>
<!-- sidebar-widget -->
<div class="sidebar-widget">
<h3 class="sidebar-title">Contact Us</h3>
<div class="widget-container">
<p>
<b>Address:</b> D208, Main Building, 9 South-3rd Street, Shahe University Park, Changping District, Beijing, 102206, China<br>
<!--
<b>Tel:</b> TBA<br>
<b>Fax:</b> TBA<br>
-->
</p>
</div>
</div>
<!-- sidebar-widget -->
<div class="sidebar-widget">
<h3 class="sidebar-title">Related Links</h3>
<div class="widget-container">
<ul style="list-style: none; padding-left: 10px;">
<li><i class="fa fa-external-link"></i> <a href="https://shi.buaa.edu.cn/zhaodanpei/zh_CN/index.htm" target="_blank">Zhao's home page</a></li>
<li><i class="fa fa-external-link"></i> <a href="http://www.sa.buaa.edu.cn/info/1014/4773.htm" target="_blank">Xie's home page</a></li>
<li><i class="fa fa-external-link"></i> <a href="https://haopzhang.github.io/" target="_blank">Zhang's home page</a></li>
<li><i class="fa fa-external-link"></i> <a href="https://zhengyushan.github.io" target="_blank">Zheng's home page</a></li>
</ul>
</div>
</div>
</aside>
</div>
</div>
</section>
</div><!-- /.container -->
<footer class="footer">
<div class="footer-bottom">
<i class="fa fa-copyright"></i> Copyright 2018. All rights reserved.<br>
<!-- <i class="fa fa-anchor"></i> <a href="index_x.html"><b>X!</b><i class="fa fa-sign-in"></i></a> -->
</div>
</footer>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="scripts/jquery.min.js"></script>
<script src="scripts/bootstrap.min.js"></script>
<script src="scripts/jquery.bxslider.js"></script>
<script src="scripts/mooz.scripts.min.js"></script>
<script src="scripts/togglehide.js"></script>
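<!-- The Abstract/BibTeX toggles above rely on toggleblock() and hideblock()
from scripts/togglehide.js. As a hedge against that file failing to load, the
sketch below is an assumption about the expected behavior (not the shipped
implementation) and defines the helpers only if they are still missing. -->
<script type="text/javascript">
if (typeof toggleblock !== 'function') {
window.toggleblock = function (id) {
// Show the element if it is currently hidden, hide it otherwise.
var e = document.getElementById(id);
if (e) { e.style.display = (e.style.display === 'none') ? 'block' : 'none'; }
};
}
if (typeof hideblock !== 'function') {
window.hideblock = function (id) {
// Collapse the element (used for abstract and BibTeX blocks on load).
var e = document.getElementById(id);
if (e) { e.style.display = 'none'; }
};
}
</script>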
</body>
</html>