<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta name="keywords" content="研究成果">
<meta name="description" content="研究成果">
<meta name="applicable-device" content="pc,mobile">
<meta http-equiv="Cache-Control" content="no-siteapp">
<meta http-equiv="Cache-Control" content="no-transform">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>研究成果</title>
<link rel="stylesheet" href="css/index.css" type="text/css">
<link rel="stylesheet" type="text/css" href="css/children.css">
<script type="text/javascript" src="js/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="js/jquery.SuperSlide.2.1.1.js"></script>
<script type="text/javascript" src="js/public.js"></script>
<!-- menu JS, etc. -->
<!-- date-picker JS -->
<script type="text/javascript" src="js/laydate.js"></script>
<script type="text/javascript" src="js/index.js"></script>
</head>
<body>
<!-- nav -->
<div class="nav">
<div class="nav-container">
<div class="nav-top">
<div class="body-container">
<span class="tip">您好,欢迎来到智能医学影像理解研究中心</span>
<div id="needloginparent" class="l">
<a id="needlogin" href="login.html" target="_self">登录</a>
</div>
<div id="logoutparent" class="l" style="padding-left: 10px;">
<a id="needlogout" href="loginout.html" target="_self">退出</a>
</div>
</div>
</div>
<div class="nav-bottom">
<div class="body-container">
<div class="logo-container">
<img class="logo" src="images/ipiu.jpg">
<div class="word">
<div class="name">智能医学影像理解研究中心</div>
<div class="name-es">Intelligent Medical Image Understanding Research Center (IMIU)</div>
</div>
</div>
<div class="side-container">
<ul>
<li> <a href="index.html">首 页</a> </li>
<li> <a href="Laboratory_members.html">研究中心成员</a> </li>
<li class="active"> <a href="research_findings1.html">研究成果</a> </li>
<li> <a id="a_datamanage" href="data_manage.html">医学数据管理</a> </li>
<li> <a id="a_show" href="show_search_text.html">医学影像智能解译系统</a> </li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="researchFinding">
<div class="banner">
<div class="title wid_main">
<h3>研究成果</h3>
<p>Research Results</p>
</div>
</div>
<div class="researchFinding-container">
<div class="researchFinding-content">
<div class="researchFinding-handle">
<div class="title">研究成果</div>
<ul>
<li class="active"><a href="research_findings1.html">论文</a></li>
<li><a href="research_findings2.html">授权发明专利</a></li>
<li><a href="research_findings3.html">获奖</a></li>
</ul>
<!-- contact info -->
<div class="contact">
<h2 class="title">联系我们</h2>
<dl>
<dd>
<img src="./images/version2/Laboratory_members/tel.png">
<a>电话:029-88203744</a>
</dd>
<dd>
<img src="./images/version2/Laboratory_members/email.png">
<p>信箱:</p>
<a>shpgou@mail.xidian.edu.cn</a>
</dd>
<dd>
<img src="./images/version2/Laboratory_members/piont.png">
<a>地址:西安电子科技大学北校区主楼2区415</a>
</dd>
</dl>
</div>
</div>
<div class="info">
<p class="highlight">[1] Shuiping Gou, Yinan Xu, Hua Yang, Nuo Tong, Xiaopeng Zhang, Lichun Wei, Lina Zhao, Minwen Zheng and Wenbo Liu. Automated cervical tumor segmentation on MR images using multi-view feature attention network. Biomedical Signal Processing and Control, Volume 77, August 2022, 103832.</p>
<p>[2] Yuanning Bai, Ruimin Li, Shuiping Gou, Chenchen Zhang, Yaohong Chen, and Zhihui Zheng. Cross-Connected Bidirectional Pyramid Network for Infrared Small-Dim Target Detection. IEEE Geoscience and Remote Sensing Letters, 19(7506405), January 2022.</p>
<p>[3] Jinming Mu, Shuiping Gou, Shasha Mao, Shankui Zheng. A Stepwise Matching Method for Multi-modal Image based on Cascaded Network. ACM Multimedia Conference 2021.</p>
<p>[4] Qing Han, Yunfei Lu, Jie Han, AnLin Luo, LuGuang Huang, Jin Ding, Kui Zhang, Zhaohui Zheng, JunFeng Jia, Qiang Liang, Shuiping Gou* and Ping Zhu*. Automatic quantification and grading of hip bone marrow oedema in ankylosing spondylitis based on deep learning. Modern Rheumatology, 2021. DOI: https://doi.org/10.1093/mr/roab073</p>
<p>[5] Jun Zhong, Dong Hai, Jiaxin Cheng, Changzhe Jiao, Shuiping Gou, Yongfeng Liu, Hong Zhou and Wenliang Zhu. Convolutional Autoencoding and Gaussian Mixture Clustering for Unsupervised Beat-to-Beat Heart Rate Estimation of Electrocardiograms from Wearable Sensors. Sensors, 2021, 21, 7163. https://doi.org/10.3390/s21217163.</p>
<p>[6] Shuiping Gou, Yunfei Lu, Nuo Tong*, Luguang Huang, Ningtao Liu, Qing Han. Automatic segmentation and grading of ankylosing spondylitis on MR images via lightweight hybrid multi-scale convolutional neural network with reinforcement learning. Physics in Medicine and Biology, 66, 205002, 2021.</p>
<p>[7] Jichao Li, Shuiping Gou, Ruimin Li*, Jiawei Chen, and Xiaolong Sun. Ship Segmentation via Encoder-Decoder Network with Global Attention in High-Resolution SAR Images. IEEE Geoscience and Remote Sensing Letters, 14(8), August 2021.</p>
<p>[8] Shasha Mao, Jingyuan Yang, Shuiping Gou*, Licheng Jiao, Tao Xiong, Lin Xiong. Multi-scale Fused SAR Image Registration based on Deep Forest. Remote Sensing, vol. 13(11): 2227, 7 June 2021. (SCI: 672CZ, EI: 03187452127)</p>
<p>[9] Changzhe Jiao, Chao Chen, Shuiping Gou*, Dong Hai, Bo-Yu Su, Marjorie Skubic, Licheng Jiao, Alina Zare, and K. C. Ho. Non-Invasive Heart Rate Estimation From Ballistocardiograms Using Bidirectional LSTM Regression. IEEE Journal of Biomedical and Health Informatics, 2021, 25(9): 3396-3407.</p>
<p>[10] C. Jiao, C. Chen, S.P. Gou*, L.C. Jiao et al., “L1 Sparsity Regularized Attention Multiple Instance Network for Hyperspectral Target Detection,” IEEE Trans. Cybernetics, Accepted, 2021. (CAS Zone 1 journal, IF: 11.5)</p>
<p>[11] Shasha Mao, Weisi Lin, Licheng Jiao, Shuiping Gou, and Jiawei Chen. "End-to-End Ensemble Learning by Exploiting the Correlation Between Individuals and Weights". IEEE Transactions on Cybernetics, DOI: 10.1109/TCYB.2019.2931071, 51(5): 2835-2846, Apr 2021.</p>
<p>[12] Nuo Tong, Shuiping Gou, Shuzhe Chen, Yao Yao, Shuyuan Yang, Minsong Cao, Amar Kishan, and Ke Sheng*. Multi-task Edge-recalibrated Network for Male Pelvic Multi-Organ Segmentation on CT Images. Physics in Medicine and Biology, DOI: 10.1088/1361-6560/abcad9, 2021, 66(3): 035001.</p>
<p>[13] Xinlin Wang, Shuiping Gou, Jichao Li, Yinghai Zhao, Zhen Liu, Changzhe Jiao, Shasha Mao*. Self-paced feature attention fusion network for concealed object detection in millimeter-wave image. IEEE Transactions on Circuits and Systems for Video Technology, DOI: 10.1109/TCSVT.2021.3058246, 2021.</p>
<p>[14] Huang, LG; Li, MB; Gou, SP; Zhang, XP; Jiang, K. Automated Segmentation Method for Low Field 3D Stomach MRI Using Transferred Learning Image Enhancement Network. BioMed Research International, 2021: 1-8.</p>
<p>[15] Yao, Y; Gou, SP; Tian, R; Zhang, XR; He, SX. Automated Classification and Segmentation in Colorectal Images Based on Self-Paced Transfer Network. BioMed Research International, 2021: 1-8. DOI: 10.1155/2021/6683931.</p>
<p>[16] Nuo Tong, Shuiping Gou, Tianye Niu, Shuyuan Yang, Ke Sheng. Self-paced DenseNet with Boundary Constraint for Automated Multi-Organ Segmentation on Abdominal CT Images. Physics in Medicine and Biology, 2020, 65(13).</p>
<p>[17] Gou, Shuiping; Tong, Nuo; Qi, Sharon; Yang, Shuyuan; Chin, Robert; Sheng, Ke. Self-channel-and-spatial-attention neural network for automated multi-organ segmentation on head and neck CT images. Physics in Medicine and Biology, 2020.</p>
<p>[18] Lu Y, Li B, Liu N, Jia-Wei Chen*, Li Xiao, Shuiping Gou*, Linlin Chen, Meiping Huang*, and Jian Zhuang. CT-TEE Image Registration for Surgical Navigation of Congenital Heart Disease Based on a Cycle Adversarial Network. Computational and Mathematical Methods in Medicine, 2020, 4942121, 8 pages.</p>
<p>[19] Gou S P, Liu W, Changzhe Jiao, Haofeng Liu, et al. Gradient Regularized Convolutional Neural Networks for Low-dose CT Image Denoising. Physics in Medicine and Biology, 2019.</p>
<p>[20] Nuo Tong, Shuiping Gou, Shuyuan Yang, Maosen Cao. Shape Constrained Fully Convolutional DenseNet with Adversarial Training for Multi-organ Segmentation on Head and Neck CT and Low Field MR Images. Medical Physics, 46(1), April 2019. DOI: 10.1002/mp.13553.</p>
<p>[21] Shasha Mao, Jiawei Chen, Licheng Jiao, Shuiping Gou, Rongfang Wang. "Maximizing diversity by transformed ensemble learning". Applied Soft Computing, vol. 82, 105580, Sep 2019.</p>
<p>[22] Nuo Tong, Shuiping Gou, Shuyuan Yang, Dan Ruan, and Ke Sheng*. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Medical Physics, 45(10), 2018. DOI: 10.1002/mp.13147.</p>
<p>[23] Shuiping Gou*, Linlin Chen, Liyu Huang, Meiping Huang, Jian Zhuang. Large-Deformation Image Registration of CT-TEE for Surgical Navigation of Congenital Heart Disease. Computational and Mathematical Methods in Medicine, Vol. 2018, Article ID 4687376, 11 pages. DOI: 10.1155/2018/4687376</p>
<p>[24] Chen W S, Gou S P*, Wang X L, Li X F, Jiao L C. Classification of PolSAR Images Using Multilayer Autoencoders and a Self-Paced Learning Approach[J]. Remote Sensing, 2018, 10(1): 110.</p>
<p>[25] Wenshuai Chen, Shuiping Gou*, Xinlin Wang, Licheng Jiao, Changzhe Jiao, Alina Zare. Complex Scene Classification of PolSAR Imagery Based on a Self-Paced Learning Approach. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(12): 4818-4825, 2018.</p>
<p>[26] Nuo Tong, Shuiping Gou*, Teng Xu, Ke Sheng, Shuyuan Yang. Nonrigid registration of multimodal medical images based on hybrid model. Digital Medicine, Vol 3(4): 178-185, 2017.</p>
<p>[27] Jun Jin, Elizabeth Boehnke-McKenzie, Zhaoyan Fan, Richard Tuli, Ke Sheng, Howard Sandler, Shuiping Gou, and Wensha Yang*. Non-local means denoising of SG-KS-4D-MRI using block matching 3D: Implications for pancreatic tumor registration and segmentation. International Journal of Radiation Oncology, Biology, Physics, 2016, 95(3): 1058-1066.</p>
<p>[28] Shuiping Gou, Percy Lee, Peng Hu, Jean-Claude Rwigema, Ke Sheng*. Feasibility of automated 3-dimensional magnetic resonance imaging pancreas segmentation. Advances in Radiation Oncology, (2016) 1, 182-193.</p>
<p>[29] Shuiping Gou*, Shuzhen Liu, Yaosheng Wu, Licheng Jiao. Image super-resolution based on the pairwise dictionary selected learning and improved bilateral regularisation. IET Image Processing, 10(2): 101-112, 2016.</p>
<p>[30] Gou S P*, Wang Y Y, Wu J L, Lee P, Sheng K. Lung dynamic MRI Deblurring Using Low-rank Decomposition and Dictionary Learning. Medical Physics, 42(4): 1917-25, 2015.</p>
<p>[31] Ke Sheng*, Shuiping Gou, Jiaolong Wu, and Sharon X. Qi. Denoised and texture enhanced MVCT to improve soft tissue conspicuity. Medical Physics, 41(10): 101916, 2015.</p>
<p>[32] Shuiping Gou, Jiaolong Wu, Fang Liu, Stanislas Rapacchi, Peng Hu, Ke Sheng*. Feasibility of automated pancreas segmentation based on dynamic MRI. British Journal of Radiology, 87(1044): 20140248, 2014.</p>
<p>[33] Gou S*, Wang Y, Peng Y, Zhang X, et al. Image Sequences Restoration based on Sparse and Low-rank Decomposition. PLOS ONE, 8(9), pp. 1-10, 2013.</p>
<p>[34] Shuiping Gou*, Xiong Zhuang, Yangyang Li, Cong Xu, L C Jiao. Multi-elitist Immune Clonal Quantum Clustering Algorithm. Neurocomputing, 101(4): 275-289, 2013.</p>
<p>[35] Wenshuai Chen, Shuiping Gou*, Xinlin Wang, Xiaofeng Li and Licheng Jiao. Classification of PolSAR Images Using Multilayer Autoencoders and a Self-Paced Learning Approach. Remote Sensing, 2018, 10(1): 110. doi:10.3390/rs10010110</p>
<p>[36] Shuiping Gou*, Shuzhen Liu, Shuyuan Yang, Licheng Jiao. Remote Sensing Image Super-resolution Reconstruction Based on Nonlocal Pairwise Dictionaries and Double Regularization. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(12): 4784-4792, 2014.</p>
<p>[37] S. P. Gou*, Xin. Qiao, X.R. Zhang, W. F. Wang, F.F. Du. An Eigenvalue Analysis Based Approach for POL-SAR Image Classification. IEEE Transactions on Geoscience and Remote Sensing, 52(2), pp. 805-818, 2014.</p>
<p>[38] S. P. Gou*, X. Zhuang, H. M. Zhu, T.T. Yu. Parallel Sparse Spectral Clustering for SAR Image Segmentation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(4): 1949-1963, 2013.</p>
<p>[39] S. Gou*, X. Zhuang, L. Jiao. Quantum Immune Fast Spectral Clustering for SAR Image Segmentation. IEEE Geoscience and Remote Sensing Letters, 9(1): 8-12, 2012.</p>
<p>[40] S.P. Gou*, J. Zhang, L.C. Jiao. Fast Immune Greedy Spectral Clustering. Information-An International Interdisciplinary Journal, 15(1): 375-385, 2012.</p>
<p>[41] Gou Shuiping*, Feng Jing, Jiao Licheng. Clustering via Dimensional Reduction Method for the Projection Pursuit Based on the ICSA. Journal of Electronics, 27(4): 474-479, 2011.</p>
<p>[42] S. Gou*, S. Rapacchi, P. Hu, K. Sheng. Automated Pancreas Segmentation Based on Dynamic MRI. 56th AAPM Annual Meeting, July 20-24, 2014, Austin, TX, USA.</p>
<p>[43] K. Sheng*, S. Gou, P. Kupelian, M. Steiberg, D. Low. Detecting Tumors with Extremely Low Contrast in CT Image. 56th AAPM Annual Meeting, July 20-24, 2014, Austin, TX, USA.</p>
<p>[44] J. Jin, E. McKenzie, S. Gou*, G. Yang. Non-local means denoising of SG-KS-4D MRI improves the accuracy of deformable registration and pancreas tumor segmentation. 2015 International Symposium on AAPM.</p>
<p>[45] N. Shuai, S. Gou*, K. Sheng and T. Xu. Pancreas MRI segmentation based on low rank decomposition enhancement. The 15th Asia-Oceania Congress of Medical Physics (AOCMP 2015), Nov. 5-8, 2015.</p>
<p>[46] Shuiping Gou*, Guangan Zhuang, Licheng Jiao. Transfer Clustering Based on Dictionary Learning for Image Segmentation. SampTA 2011.</p>
<p>[47] Gou, S. P.*, Yang Jingyu. Spectral Clustering Based on Dictionary Learning Sampling for Image Segmentation. IScIDE, pp. 1-4, 2011.</p>
<p>[48] Nuo Tong, Shuiping Gou*. Gastric Lymph Nodes Detection Based on Visual Saliency and Dictionary Learning. 2016 International Technical Conference of IEEE Region 10.</p>
<p>[49] Linlin Chen, Shuiping Gou*, Yao Yao, Jing Bai. Denoising of Low Dose CT Image with Context-Based BM3D. 2016 International Technical Conference of IEEE Region 10.</p>
</div>
</div>
<!-- footer -->
<div class="footerbox bg">
<div class="wid_main fix">
<div class="l">
<a href="index.html" target="_blank" class="dib vm"><img src="images/ipiu.jpg" width="85" height="85"></a>
</div>
<div class="l txt">
<p>版权所有:西安电子科技大学智能医学影像理解研究中心&nbsp;&nbsp;&nbsp;地址:西安电子科技大学人工智能学院</p>
<p>西安电子科技大学智能医学影像理解研究中心建设和维护</p>
</div>
</div>
</div>
</div>
<!-- important news displayed without scrolling -->
<style>
.banner{
  width: 100%;
  height: 248px;
  background-color: red;
  background-image: url(./images/version2/research_finding/bg.png);
  background-repeat: no-repeat;
  background-size: 100% 100%;
}
.banner img{
  width: 100%;
  height: 248px;
  display: block;
}
.banner .title{
  position: absolute;
  top: 45px;
  left: 50%;
  transform: translateX(-50%);
  color: #fff;
}
.banner .title h3{
  font-size: 36px;
  line-height: 54px;
  font-weight: 500;
}
.banner .title p{
  font-size: 14px;
  line-height: 22px;
}
.researchFinding{
  width: 100%;
  position: relative;
}
.researchFinding-container{
  width: 100%;
  position: absolute;
  top: 248px;
  z-index: 999;
  background-image: url(./images/version2/Laboratory_members/bg.png);
  background-size: 100% 100%;
  background-repeat: no-repeat;
}
.researchFinding-content{
  width: 1200px;
  min-height: 500px;
  background-color: #fff;
  margin: -100px auto 0;
  box-shadow: 0 0 2px rgba(0,0,0,0.1);
  border-radius: 4px;
  display: flex;
  justify-content: flex-start;
  padding: 40px 20px 100px 40px;
  box-sizing: border-box;
}
.researchFinding-content .researchFinding-handle{
  width: 240px;
  height: 650px;
  background-color: #005389;
  color: #fff;
  padding-left: 20px;
  margin-right: 57px;
}
.researchFinding-content .researchFinding-handle .title{
  font-size: 28px;
  line-height: 26px;
  color: #fff;
  font-weight: bold;
  padding: 32px 0;
  text-indent: 20px;
}
.researchFinding-content .researchFinding-handle ul>li{
  width: 221px;
  height: 64px;
  line-height: 64px;
  background-color: #fff;
  padding-left: 20px;
  position: relative;
}
.researchFinding-content .researchFinding-handle ul li::after{
  content: "";
  display: none;
  width: 15px;
  height: 100%;
  position: absolute;
  left: -8px;
  top: 0;
  background-image: url(./images/version2/Laboratory_members/rectangle.png);
  background-size: 100% 100%;
  background-repeat: no-repeat;
}
.researchFinding-content .researchFinding-handle ul>li a{
  color: #333;
  font-size: 18px;
}
.researchFinding-content .researchFinding-handle ul>li.active{
  background-color: #E0620D;
}
.researchFinding-content .researchFinding-handle ul>li.active a{
  color: #fff;
}
.researchFinding-content .researchFinding-handle ul>li.active::after{
  display: block;
}
.researchFinding-content .researchFinding-handle .contact .title{
  padding: 48px 0 30px 0;
  line-height: 1;
  font-size: 26px;
}
.researchFinding-content .researchFinding-handle .contact a{
  font-size: 14px;
  color: #fff;
}
.researchFinding-content .researchFinding-handle .contact dd{
  padding-left: 24px;
  position: relative;
  margin-bottom: 21px;
}
.researchFinding-content .researchFinding-handle .contact dd img{
  display: block;
  width: 18px;
  position: absolute;
  left: 0;
  top: 4px;
}
.researchFinding-content .info{
  max-height: 970px;
  overflow-y: auto;
  overflow-x: hidden;
}
.researchFinding-content .info p{
  font-size: 16px;
  line-height: 30px;
  color: #333;
  padding: 20px 8px 20px 20px;
  box-sizing: border-box;
  position: relative;
}
.researchFinding-content .info p::after{
  content: '';
  display: block;
  position: absolute;
  left: 10px;
  top: 32px;
  width: 5px;
  height: 5px;
  background-color: #ccc;
}
.researchFinding-content .info p.highlight{
  background-color: rgba(224, 98, 13, 0.1);
}
</style>
</body>
</html>