Mário A. T. Figueiredo
Universidade de Lisboa, Portugal
Mário A. T. Figueiredo received the MSc and PhD degrees in electrical and computer engineering, both from Instituto Superior Técnico (IST), the engineering school of the University of Lisbon, in 1990 and 1994, respectively.
He has been with the faculty of the Department of Electrical and Computer Engineering, IST, since 1994, where he is now a Full Professor. He is also area coordinator and group leader at Instituto de Telecomunicações, a private non-profit research institute. His research interests include image processing and analysis, machine learning, and optimization. Mário Figueiredo is a Fellow of the IEEE and of the IAPR and is included in the 2014 and 2015 Thomson Reuters' Highly Cited Researchers lists; he received the 1995 Portuguese IBM Scientific Prize, the 2008 UTL/Santander-Totta Scientific Prize, the 2011 IEEE Signal Processing Society Best Paper Award, the 2014 IEEE W. R. G. Baker Award, and several conference best paper awards. He is/was associate editor of several journals (among others, the IEEE Transactions on Image Processing, IEEE Transactions on Pattern Analysis and Machine Intelligence, SIAM Journal on Imaging Sciences, Journal of Mathematical Imaging and Vision) and served as organizer or program committee member of many international conferences.
“Modern Optimization in Image Reconstruction: Some Recent Highlights”
Modern optimization plays a central role in image reconstruction, namely in addressing inverse problems using state-of-the-art regularization. The optimization problems resulting from these formulations are characterized by being non-smooth and usually of very high dimensionality, which has stimulated much research in special-purpose algorithms. This talk will present a historical overview of this area, from the first algorithms proposed in the early 2000s to the most recent advances, which are orders of magnitude faster than those early methods. In the last part of the talk, some recent non-convex optimization techniques (namely for blind image deconvolution) will also be addressed.
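For readers unfamiliar with the algorithm family the talk surveys, the following is a minimal sketch of a proximal-gradient (ISTA-style) iteration for an ℓ1-regularized least-squares reconstruction problem. The toy problem, parameter values, and function names are illustrative, not taken from the talk:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: recover a sparse signal from noisy underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y, lam=0.1)
```

Later accelerated variants (e.g., FISTA) add a momentum step to this same structure, which is one source of the order-of-magnitude speedups the talk describes.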
Khan M. Iftekharuddin
Old Dominion University, USA
Dr. Khan M. Iftekharuddin is a professor and chair of the Department of Electrical and Computer Engineering at Old Dominion University (ODU). He serves as the director of the ODU Vision Lab. He is also a member of the Biomedical Engineering program at ODU. He is the principal author of a book on image processing, more than one hundred and eighty refereed journal papers and conference proceedings, and multiple book chapters on computational modeling of bio-inspired and reinforcement learning systems, stochastic computational neuroimaging analysis and machine learning for disease prognosis and prediction, the intersection of genomics and radiomics in brain tumor analysis, distortion-invariant and generalized pattern recognition, human- and machine-centric recognition in behavioral analysis, recurrent deep networks for vision processing and face recognition, emotion detection from speech and discourse, sensor signal acquisition and modeling, and optical computing and interconnection. He is currently serving as an associate editor for several journals, including Optical Engineering, IEEE Transactions on Image Processing, International Journal of Imaging, Open Cybernetics and Systemics Journal, and International Journal of Tomography and Statistics. He is a fellow of SPIE, a senior member of both IEEE and OSA, and a member of IEEE CIS.
"Quantitative Image Analysis for Brian Tumor"
In our earlier work, we demonstrated that multiresolution texture features such as fractal dimension (FD) and multifractional Brownian motion (mBm) offer robust tumor and non-tumor tissue segmentation in brain MRI. We also showed the efficacy of these texture and other intensity features in delineating multiple abnormal tissues. This talk will discuss an integrated quantitative image analysis framework that includes all necessary steps: MRI inhomogeneity correction, feature extraction, multiclass feature selection, and multimodality abnormal brain tissue segmentation. Finally, we evaluate our method on large-scale public and private datasets.
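The fractal dimension feature mentioned above is commonly estimated by box counting. The sketch below is a generic illustration of that idea on a binary pattern (it is not the talk's actual pipeline, which operates on MRI texture); the function name and box sizes are my own, and the image is assumed square for brevity:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a square binary 2D pattern.

    Counts how many s-by-s boxes contain any foreground pixel for several box
    sizes s, then fits log(count) against log(1/s) by least squares; the slope
    is the estimated dimension.
    """
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Trim so the side length divides evenly, then mark a box as
        # "occupied" if any pixel inside it is set.
        trimmed = mask[: n - n % s, : n - n % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, -1, s).any(axis=(1, 3))
        counts.append(max(boxes.sum(), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square patch should have dimension close to 2.
img = np.zeros((64, 64), bool)
img[8:56, 8:56] = True
fd = box_counting_dimension(img)
```

Irregular, space-filling textures (such as tumor boundaries in MRI) yield fractional values between 1 and 2, which is what makes FD useful as a discriminative texture feature.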
Leo Joskowicz
The Hebrew University of Jerusalem, Israel
Leo Joskowicz is a Professor at the School of Engineering and Computer Science at the Hebrew University of Jerusalem, Israel, and the founder and director of the Computer-Aided Surgery and Medical Image Processing Laboratory (CASMIP Lab) since 1996. He obtained his PhD in Computer Science at the Courant Institute of Mathematical Sciences, New York University, in 1988. From 1988 to 1995, he was at the IBM T.J. Watson Research Center, Yorktown Heights, New York, where he conducted research in intelligent computer-aided design and computer-aided orthopaedic surgery. From 2001 to 2009 he was the Director of the Leibniz Center for Research in Computer Science. Prof. Joskowicz is the recipient of the 2010 Maurice E. Muller Award for Excellence in Computer Assisted Surgery by the International Society of Computer Aided Orthopaedic Surgery and the 2007 Kaye Innovation Award, The Hebrew University.
Prof. Joskowicz is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers) and the ASME (American Society of Mechanical Engineers). He is a member of the Board of Directors of the MICCAI Society (Medical Image Computing and Computer Assisted Intervention) and serves on the Editorial Boards of Computer-Aided Surgery, Medical Image Analysis, Journal of Computer Assisted Radiology and Surgery, Advanced Engineering Informatics, ASME Journal of Computing and Information Science in Engineering, and Annals of Mathematics and Artificial Intelligence. He has published over 200 technical works, including conference and journal papers, book chapters, and editorials, and has served on numerous related program committees.
“How is your tumor doing today? Computer-based tumor analysis and follow-up in radiological oncology”
Radiological follow-up of tumors is the cornerstone of modern oncology. About 25% of the 60 million CT studies performed worldwide each year are related to oncology, with a higher proportion for brain MRI studies. Currently, radiologists perform the initial diagnosis and subsequent tumor follow-up manually. This evaluation is tedious, time-consuming, and error-prone: it varies among radiologists and can be off by up to 50%. These drawbacks hamper the clinical decision-making process and may lead to sub-optimal or inadequate treatment.
In this talk, we will present a new framework for robust, accurate, and automatic or nearly automatic delineation and follow-up of solid tumors in longitudinal multispectral CT and MRI datasets. We will describe new image processing algorithms for solid tumors of the brain, lungs, and liver, and for the evaluation of plexiform neurofibroma progression. We will present the results of our experimental studies and the clinical experience with the software prototype at the Sourasky Medical Center, Tel Aviv.
Joint work with:
L. Weizman, R. Vivanti, D. Helfer, and Dr. Y. Shoshan, Hebrew U.
Drs. D. Ben Bashat, L. Pratt, L. Ben Sira, B. Shofty and Prof. S. Constantini, Sourasky Medical Center (Ichilov), Tel-Aviv.
Gerald Schaefer
Loughborough University, UK
Gerald Schaefer gained his PhD in Computer Vision from the University of East Anglia. He worked at the Colour & Imaging Institute, University of Derby (1997-1999), in the School of Information Systems, University of East Anglia (2000-2001), in the School of Computing and Informatics at Nottingham Trent University (2001-2006), and in the School of Engineering and Applied Science at Aston University (2006-2009) before joining the Department of Computer Science at Loughborough University.
His research interests are mainly in the areas of colour image analysis, image retrieval, physics-based vision, medical imaging, and computational intelligence. He has published extensively in these areas with a total publication count exceeding 400. He is/was a member of the editorial board of more than 20 international journals, has reviewed for over 120 journals and served on the programme committee of more than 400 conferences. He has been invited as keynote or tutorial speaker to numerous conferences, is the organiser of various international workshops and special sessions at conferences, and the editor of several books, conference proceedings and special journal issues.
"Interactive browsing systems for large image collections"
Image collections are growing rapidly, and consequently efficient and effective tools to manage them are highly sought after. Content-based approaches are based on the principle of visual similarity derived from image features and seem necessary since most images are unannotated. However, typical content-based retrieval approaches have shown limited usefulness and do not allow full exploration of image collections. In my talk, I will present interactive image database browsing systems as an interesting alternative to direct retrieval approaches. Utilising content-based concepts, large image collections can be visualised based on their mutual visual similarity, while the user is able to interactively explore them by means of various browsing operations. After introducing the main approaches to visualising and browsing image databases, I will focus on some of the systems that we have developed in our lab for this purpose, in particular the Hue Sphere Image Browser and the Honeycomb Image Browser, as well as their ports to mobile devices.
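To make the similarity-based layout idea concrete, here is a deliberately simplified sketch in the spirit of a hue-sphere layout: each image is reduced to its average hue and lightness, which become longitude and latitude on a sphere, so visually similar images land near each other. The function name, the coordinate convention, and the flat pixel-list input are my own illustrative assumptions, not the actual browser's implementation:

```python
import colorsys

def hue_sphere_position(pixels):
    """Map an image, given as a list of (r, g, b) tuples in [0, 1],
    to a (longitude, latitude) point on a sphere.

    Average hue (a circular quantity) becomes the longitude; average
    lightness becomes the latitude, so dark images sit near one pole
    and bright images near the other.
    """
    hues, lights = [], []
    for r, g, b in pixels:
        h, l, _ = colorsys.rgb_to_hls(r, g, b)
        hues.append(h)
        lights.append(l)
    mean_hue = sum(hues) / len(hues)
    mean_light = sum(lights) / len(lights)
    longitude = 360.0 * mean_hue
    latitude = 180.0 * mean_light - 90.0
    return longitude, latitude

# Two mostly-red thumbnails should land close together; a blue one far away.
red_a = [(0.9, 0.1, 0.1)] * 16
red_b = [(0.8, 0.2, 0.2)] * 16
blue = [(0.1, 0.1, 0.9)] * 16
pos_red_a = hue_sphere_position(red_a)
pos_red_b = hue_sphere_position(red_b)
pos_blue = hue_sphere_position(blue)
```

A production browser would of course use more robust statistics than the arithmetic mean (hue is circular) and richer features, but the principle of mapping feature similarity to spatial proximity is the same.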
Dinggang Shen
University of North Carolina at Chapel Hill, USA
Dinggang Shen is a Professor of Radiology, Biomedical Research Imaging Center (BRIC), Computer Science, and Biomedical Engineering at the University of North Carolina at Chapel Hill (UNC-CH). He currently directs the Center for Image Analysis and Informatics, the Image Display, Enhancement, and Analysis (IDEA) Lab in the Department of Radiology, and the medical image analysis core in the BRIC. He was a tenure-track assistant professor at the University of Pennsylvania (UPenn) and a faculty member at Johns Hopkins University. Dr. Shen’s research interests include medical image analysis, computer vision, and pattern recognition. He has published more than 700 papers in international journals and conference proceedings. He serves as an editorial board member for six international journals. He also served on the Board of Directors of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society from January 2012 to December 2015.
“Machine Learning in Medical Imaging Analysis”
This talk will summarize our recently developed machine learning techniques, including sparse learning and deep learning, for various applications in medical imaging. Specifically, in the neuroimaging field, we have developed an automatic tissue segmentation method for first-year brain images, with the goal of early detection of autism (e.g., before one year of age), as well as a novel multivariate classification method for early diagnosis of Alzheimer’s Disease (AD), with the goal of enabling early treatment. In the image reconstruction field, we have developed a sparse learning method for reconstructing 7T-like MRI from 3T MRI to enhance image quality, and another novel sparse learning technique for estimating standard-dose PET images from low-dose PET and MRI data. Finally, in the cancer radiotherapy field, we have developed an innovative regression-guided deformable model to automatically segment pelvic organs from a single planning CT, a task currently done manually, as well as a novel image synthesis technique for estimating CT from MRI, supporting the new direction of MRI-based dose planning (and also PET attenuation correction when a PET/MRI scanner is used). All these techniques and applications will be discussed in this talk.
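The cross-modality estimation problems above (7T-like MRI from 3T, CT from MRI) share a common patch-based learning structure: learn a mapping from low-quality input patches to high-quality target patches on training pairs, then apply it patch by patch. As a hedged stand-in, the sketch below uses plain ridge regression on synthetic data rather than the sparse-learning machinery the talk actually describes; all names and numbers are illustrative:

```python
import numpy as np

def train_patch_regressor(low_patches, high_patches, ridge=1e-3):
    """Fit W minimizing ||X W - Y||^2 + ridge*||W||^2 in closed form,
    where rows of X are input patches and rows of Y are target patches."""
    X, Y = low_patches, high_patches
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)

# Synthetic stand-in: "high-quality" 3x3 patches are a fixed linear
# transform of "low-quality" ones plus noise; the regressor should
# recover that transform from 500 training pairs.
rng = np.random.default_rng(2)
W_true = np.eye(9) + 0.1 * rng.standard_normal((9, 9))
low = rng.standard_normal((500, 9))
high = low @ W_true + 0.01 * rng.standard_normal((500, 9))
W = train_patch_regressor(low, high)
pred = low @ W
```

Replacing the closed-form solve with a sparse-coding step over a learned dictionary yields the kind of sparse-learning estimator the abstract refers to.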
Zeyun Yu
University of Wisconsin at Milwaukee, USA
Dr. Zeyun Yu received his B.S. in mathematics from Beijing University and his Ph.D. in computer science from the University of Texas at Austin. He is currently an associate professor in the Department of Electrical Engineering and Computer Science at the University of Wisconsin–Milwaukee (UWM). He established and currently directs the Biomedical Modeling and Visualization Lab at UWM (https://pantherfile.uwm.edu/yuz/www/bmv/). His research, primarily supported by the National Institutes of Health, involves generating high-quality 3D computational models of biological structures using advanced image processing, computer graphics, and scientific computing methods. Since 2012, Dr. Yu has been serving as an associate editor of the Journal of Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, and on the editorial boards of several other journals. He has co-organized several sessions at international and domestic workshops on image and geometric processing, and has been on the program committees of numerous conferences. He has published more than 80 journal papers, conference papers, and book chapters, and has been an invited reviewer for over 30 journals. Dr. Yu is a guest professor at the Chinese Academy of Sciences and the Medical College of Chongqing (China), working on algorithm and software development for geometric and image processing in advanced 3D printing.
"High-Quality 3D Reconstruction of Microscopic Samples via Sparse-Dense Image Correspondence"
Scanning Electron Microscopy (SEM) has been one of the principal imaging tools for structural investigations in fields such as the biomedical, mechanical, and materials sciences. Despite their high resolution, the captured images remain two-dimensional (2D). Truly three-dimensional reconstructions provide much richer, and often required, information for direct 3D visualization, quantification, and structure-based animation/simulation. While much work has been done on reconstructing a 3D surface from multiple 2D views of an object, little attention has been paid specifically to 3D SEM image reconstruction and its applications in various areas.
In this talk, a novel framework based on sparse-dense correspondence is introduced and investigated for 3D reconstruction from multi-view SEM images. Multiple SEM images of microscopic samples are captured by tilting the specimen stage by known angles. Each pair of stereo SEM images is then rectified using sparse Scale Invariant Feature Transform (SIFT) features/descriptors to ensure a coarse horizontal disparity between corresponding points. This step is followed by dense correspondence using dense SIFT descriptors. With a factor graph representation of the energy minimization functional and loopy belief propagation (LBP) as the means of optimization, we can faithfully compute horizontal/vertical disparity maps. Given the low energy of the vertical disparity, which results from the rectification process, and the known tilt angle of the specimen stage between acquisitions of the multi-view micrographs, the depth can be recovered. Extensive investigations show the strength of the proposed method for high-quality reconstruction of microscopic samples.
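The geometry of the final step can be illustrated with a toy pipeline: estimate a dense horizontal disparity map between two rectified views, then convert parallax to height using the known stage tilt. Brute-force SSD block matching below stands in for the dense-SIFT + LBP stage of the actual method, and the parallax-to-depth formula is the classic eucentric-tilt relation; all names and parameters are illustrative:

```python
import numpy as np

def disparity_ssd(left, right, max_disp=8, win=3):
    """Dense horizontal disparity by brute-force SSD block matching.

    A simple stand-in for the dense correspondence stage: for each pixel
    in the left image, find the horizontal shift into the right image
    that minimizes the sum of squared differences over a small window.
    """
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w))
    for y in range(pad, h - pad):
        for x in range(pad, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - pad) + 1):
                cand = right[y - pad:y + pad + 1, x - d - pad:x - d + pad + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_tilt(disp, tilt_deg):
    # Classic eucentric-tilt photogrammetry: z = d / (2 * sin(alpha / 2)),
    # where d is the measured parallax and alpha is the total tilt angle.
    return disp / (2.0 * np.sin(np.radians(tilt_deg) / 2.0))

# Synthetic check: a textured image shifted by 3 pixels simulates the
# parallax between two stage-tilt acquisitions.
rng = np.random.default_rng(1)
left = rng.random((20, 20))
right = np.roll(left, -3, axis=1)
disp = disparity_ssd(left, right)
```

The real method replaces this local winner-take-all matching with a factor-graph energy minimized by loopy belief propagation, which regularizes the disparity map where local matching is ambiguous.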