Software-Defined System-on-Chip Environments for the Industrial Internet of Things


Keynote Speaker: Giulio Corradi, Xilinx, Munich.

Abstract: With the advent of smarter systems and the drive towards the Internet of Things, most new products now leverage System-on-Chip (SoC) platforms for continuous innovation and product differentiation. Looking forward, soon-to-be-available leading-edge 7nm technology will provide even higher integration, fitting more circuit elements onto a single chip and making it enormously capable but also complex to program. As a matter of fact, designers of such platforms face increasing challenges in partitioning their code and functions across the SoC. Heterogeneous SoCs that combine programmable logic with application processors and real-time processors can ease the solution, but may introduce churn between hardware and software teams when seeking the optimal partitioning, acceleration, code offload, and the best way to provide the required connectivity. New instruments backed by robust methodologies are necessary. This talk introduces new heterogeneous platforms such as the Zynq UltraScale+, and shows how to make such large systems manageable using the SDSoC environment for system-level profiling, automated software acceleration in programmable logic, automated system connectivity generation, and libraries to speed up programming. With regard to parallelism, the talk will introduce SDAccel, an architecturally optimized compiler that allows software developers to optimize and compile streaming, low-latency, and custom data-path applications using any combination of OpenCL, C, and C++ kernels. The talk will also show some examples from the Industrial Internet of Things.
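
As a rough illustration of the kind of code such a flow targets, the C++ sketch below shows a hypothetical hot spot (a simple dot product) of the sort that system-level profiling might flag for offload into programmable logic. The function name, data size, and comments are illustrative assumptions, not material from the talk or the SDSoC documentation.

    // Hypothetical hot loop of the kind system-level profiling could flag for
    // hardware offload in an SDSoC-style flow; names and sizes are illustrative.
    constexpr int N = 1024;

    int dot_product(const int a[N], const int b[N]) {
        int acc = 0;
        for (int i = 0; i < N; ++i) {
            // In a hardware build, a tool of this kind would synthesize this loop
            // into programmable logic and generate the data movers that stream
            // a and b between memory and the accelerator.
            acc += a[i] * b[i];
        }
        return acc;
    }

The point of the example is only that the partitioning decision starts from ordinary software: a regular, compute-bound loop like this is a natural candidate to move from the application processors into the programmable logic.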

Short bio

Dr. Giulio Corradi is an ISM (Industrial, Scientific, Medical) Senior System Architect at Xilinx in Munich, Germany. He brings 25 years of experience in management, software engineering, and the development of ASICs and FPGAs for industrial, automation and medical systems, specifically in the field of control and communication for major corporations.

DSP algorithms, applied chromatography, motor control, real-time communication, and functional safety have been his major areas of focus. From 1997 to 2005 he managed several European-funded research projects on train communication networking and wireless remote diagnostic systems.

Between 2000 and 2005 he headed the IEC 61375 conformance test standard. In 2006 he joined the Xilinx Munich office, contributing to the Xilinx Industrial Networking and Motor Control reference platforms and providing customer guidance on functional safety applications. In his spare time he enjoys swimming and playing the piano.

HPC and Data Analytics Infrastructure for the Human Brain Project


Keynote Speaker: Thomas Lippert, IAS, Jülich.

Abstract: The HBP, the Human Brain Project, is one of two European flagship projects foreseen to run for 10 years. The HBP aims at creating an open, neuroscience-driven infrastructure for simulation and big-data-aided modelling and research, with a credible user programme. The goal of the HBP is to progressively understand the structure and functionality of the human brain, strongly based on a reverse-engineering philosophy. In addition, it aims at advancements in digital computing by means of brain-inspired algorithms, with the potential to create a completely novel analogue computing technology called neuromorphic computing. The HBP simulation and data analytics infrastructure will be based on a federation of supercomputing and data centers contributing to the specific requirements of neuroscience in a complementary manner. It will encompass a variety of simulation and data analytics services, ranging from the molecular level through the synaptic and neuronal levels up to cognitive and robotic models. The major challenge is that HBP research will require exascale capabilities for computing, data integration and data analytics. Mastering these challenges amounts to an enormous interdisciplinary software and hardware co-design effort involving neuroscientists, physicists, mathematicians and computer scientists on an international scale. The HBP is a long-term endeavour and thus places great emphasis on education and training. The maturity of a service is critical, and it is important to differentiate between an early prototype, the development phase, and the delivery of services in order to assess capability levels. The services and infrastructures of the HBP will successively include more European partners, in particular PRACE sites, and will be made available step by step to the neuroscience and computer science communities.

Short bio

Thomas Lippert is director of the Institute for Advanced Simulation and head of the Jülich Supercomputing Centre at Forschungszentrum Jülich, where he has created a simulation and data support laboratory for neuroscience together with colleagues from the institute for medicine.

He is spokesman of the programme Supercomputing & Big Data in the research field Key Technologies of the German Helmholtz Association, and he is chair of the German Gauss Centre for Supercomputing (GCS). At the European level he is director of the HPC platform within the Human Brain Project (HBP) flagship; he also coordinates the series of EU-funded implementation projects for the Partnership for Advanced Computing in Europe (PRACE), as well as the exascale hardware projects DEEP and DEEP-ER.

Thomas Lippert holds the chair for Computational Theoretical Physics at the University of Wuppertal. His research interests include high precision simulations of elementary particles, numerical and parallel algorithms, cluster computing hardware and software, and quantum information processing.

The Role of Ubicomp toward Sustainable Futures


Keynote Speaker: Adrian Friday, Lancaster University, Lancaster UK.

Abstract: To say that we are living at a time where unprecedented threats to our survival may be mediated or undermined by digital technology is no exaggeration. From climate change to national security, digital tools’ strengths bring unparalleled potential for understanding and controlling our world in new ways. Yet they simultaneously put us at risk of catastrophic harm from ill-designed, privacy-invasive, and insecure technologies. ICT itself now accounts for 10% of global energy demand, and climbing, yet controlling this impact is not yet a factor in systems design or in most CS curricula. I’m drawn to Computer Science’s potential for addressing large-scale societal challenges, such as climate change. In this talk I first offer a glimpse of the insights for Ubicomp and human-computer system design gained through the lens of our recent studies of energy use in the home and of mobile data demand; and secondly, I discuss ways in which we might evolve such systems to more profoundly challenge ‘the normal way’ energy is used.

Short bio

Adrian Friday is an active researcher with over 20 years’ experience in developing and studying infrastructure for real-world ubiquitous systems, from the early origins of mobile computing in the 1990s through to his longitudinal ‘in the wild’ studies: for example, since 2006 he has run a unique research testbed of over 30 networked public displays still in daily use by end users in the UK, US and Europe.

He has extensive experience in leading collaborative and multidisciplinary research. He was one of the Principal Investigators in Equator, a high-impact, UK-wide interdisciplinary initiative (2001-2007), and a co-investigator in the EU-funded PDNET and Recall FET projects. Adrian is widely published and cited in the international research community, with over 120 peer-reviewed articles to date. He was TPC chair of the leading Ubicomp & Pervasive conferences in 2006 and 2009, and general co-chair of Ubicomp 2014.

His recent work is at the intersection of Computing and Sustainability. He has brought several disciplines together to explore energy use in the home, from the system level to its usage context and large-scale GHG impacts. This has led to several recent accolades, including a sustainability award at Pervasive 2010 and a best paper award at CHI 2015.


Design Space Exploration and Application Autotuning for Runtime Adaptivity in Multicore Architectures


Keynote Speaker: Cristina Silvano, Politecnico di Milano.

Abstract: Given the increasing complexity of multi/manycore architectures, a wide range of architecture parameters must be tuned at design time to find the best tradeoffs in terms of multiple metrics such as energy and delay. Given the huge design space of manycore architectures, automatic design space exploration is necessary to systematically support, at design time, the exploration and comparison of design alternatives in terms of multiple competing objectives. At runtime, manycore architectures offer a set of resources that can be assigned and managed dynamically to reach a specified Quality of Service. Applications can expose to the runtime a set of software knobs (including application parameters, code transformations and code variants) to trade off Quality of Results and Throughput. Resource management and application autotuning are key issues for enabling computing systems to operate close to optimal efficiency by adjusting their behavior in the face of changing conditions, operating environments, usage contexts and resource availability, while meeting the requirements on energy efficiency and Quality of Service.

This talk will present multi-objective DSE techniques for many-core architectures. The key techniques include a set of sampling and optimization techniques for finding Pareto points, and Design of Experiments (DoE) techniques to identify the experimentation plan. Machine learning techniques can be used to obtain a prediction of system behavior based on the training data generated by the DoE. The talk also presents an application autotuning framework to tune the software knobs in an adaptive multi-application scenario. To support this scenario, where different applications run concurrently on the same platform, system resources must be assigned to the active applications and managed efficiently. The approach exploits the orthogonality between application autotuning and runtime management of system resources to support multiple adaptive applications. Overall, the main challenge is to combine design-time and run-time concepts into an effective form of “self-aware” computing.
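
To make the notion of Pareto points concrete, the minimal C++ sketch below filters a set of sampled design points down to its non-dominated (Pareto-optimal) subset for two competing objectives, energy and delay. The structure and names are illustrative assumptions for exposition only, not part of the DSE framework presented in the talk.

    #include <vector>

    // One sampled design point with two competing objectives (e.g. energy, delay).
    struct DesignPoint {
        double energy;
        double delay;
    };

    // True if a dominates b: no worse in both objectives, strictly better in one.
    static bool dominates(const DesignPoint& a, const DesignPoint& b) {
        return a.energy <= b.energy && a.delay <= b.delay &&
               (a.energy < b.energy || a.delay < b.delay);
    }

    // Keep only the non-dominated (Pareto-optimal) points of a sampled design space.
    std::vector<DesignPoint> paretoFront(const std::vector<DesignPoint>& points) {
        std::vector<DesignPoint> front;
        for (const DesignPoint& candidate : points) {
            bool dominated = false;
            for (const DesignPoint& other : points) {
                if (dominates(other, candidate)) { dominated = true; break; }
            }
            if (!dominated) front.push_back(candidate);
        }
        return front;
    }

In a real DSE flow, the sampled points would come from a DoE plan or from a machine-learning predictor of system behavior rather than from exhaustive simulation of the whole design space.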

Short bio

Cristina Silvano is an Associate Professor (with tenure) of Computer Engineering at the Politecnico di Milano. She received her MS degree (Laurea) in Electrical Engineering from the Politecnico di Milano in 1987. From 1987 to 1996, she was a Senior Design Engineer at the R&D Labs of Groupe Bull in Pregnana Milanese (Italy) and a Visiting Engineer at the Bull R&D Labs in Billerica (US) (1988-1989) and at the IBM Somerset Design Center, Austin (US) (1993-1994). She received her Ph.D. in Computer Engineering from the University of Brescia in 1999.

She was Assistant Professor of Computer Science at the University of Milano (2000-2002) and then Associate Professor at the Politecnico di Milano (2002-present). Her primary research interests focus on computer architectures and electronic design automation, with particular emphasis on power-aware design for embedded systems, design space exploration and runtime resource management for many-core architectures. Her research has been funded by several national and international projects. In particular, she was Principal Investigator of some industrially funded research projects in collaboration with STMicroelectronics.

She is currently Project Coordinator of the H2020-FET-HPC ANTAREX European project on autotuning and adaptivity for energy-efficient exascale High Performance Computing systems. She has published more than 140 papers in premier international journals and conferences. She was co-editor of two scientific books published by Springer in 2010 and 2011.
