Prof. Jenq-Neng Hwang received the BS and MS degrees, both in electrical engineering, from National Taiwan University, Taipei, Taiwan, in 1981 and 1983, respectively. He then received his Ph.D. degree from the University of Southern California. In the summer of 1989, Dr. Hwang joined the Department of Electrical Engineering of the University of Washington in Seattle, where he has been a Full Professor since 1999. He served as Associate Chair for Research from 2003 to 2005 and from 2011 to 2015, and is currently the Associate Chair for Global Affairs and International Development in the EE Department. He has written more than 300 journal papers, conference papers, and book chapters in the areas of multimedia signal processing and multimedia system integration and networking, including the authored textbook "Multimedia Networking: From Theory to Practice," published by Cambridge University Press. Dr. Hwang has close working relationships with industry on multimedia signal processing and multimedia networking.
Prof. Hwang received the 1995 IEEE Signal Processing Society Best Journal Paper Award. He is a founding member of the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society and was the Society's representative to the IEEE Neural Network Council from 1996 to 2000. He is currently a member of the Multimedia Technical Committee (MMTC) of the IEEE Communications Society and of the Multimedia Signal Processing Technical Committee (MMSP TC) of the IEEE Signal Processing Society. He has served as an associate editor for IEEE T-SP, T-NN, T-CSVT, T-IP, and the Signal Processing Magazine (SPM), and is currently on the editorial boards of the ETRI, IJDMB, and JSPS journals. He is the Program Co-Chair of IEEE ICME 2016 and was a Program Co-Chair of ICASSP 1998 and ISCAS 2009. Dr. Hwang has been a Fellow of the IEEE since 2001.
With the huge number of networked video cameras now installed everywhere, such as statically deployed surveillance cameras and the constantly moving dash cameras on vehicles, there is an urgent need for embedded intelligence for automated tracking and event understanding of video objects. In this talk, I will first present our human tracking approach based on tracking-via-detection and constrained multi-kernel optimization on 3D coordinates, enabled by continuous self-calibration of static/moving cameras. These cameras also continuously learn the temporal and color/texture appearance characteristics among one another in a fully unsupervised manner, so that human tracking across multiple cameras can be effectively integrated and reconstructed on a 3D open map service.
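(For orientation, the following is a minimal sketch of a tracking-via-detection loop with ground-plane association, assuming a generic per-frame detector and a calibrated per-camera homography; all function names, parameters, and thresholds below are illustrative assumptions, not the speaker's actual implementation.)

# Illustrative tracking-via-detection step: image footpoints from a
# hypothetical detector are projected onto the ground plane via a
# calibrated homography H, then greedily matched to existing tracks.
import numpy as np

def to_ground_plane(footpoint_xy, H):
    """Project an image footpoint (x, y) to ground-plane coordinates via homography H."""
    p = H @ np.array([footpoint_xy[0], footpoint_xy[1], 1.0])
    return p[:2] / p[2]

def associate(tracks, detections, H, gate=1.5):
    """Greedy nearest-neighbour association in ground-plane coordinates.

    tracks:     dict {track_id: last ground-plane position, shape (2,)}
    detections: list of image footpoints [(x, y), ...]
    gate:       maximum ground-plane distance (metres) for a valid match
    """
    world = [to_ground_plane(d, H) for d in detections]
    unmatched = set(range(len(world)))
    matches = {}
    for tid, pos in tracks.items():
        if not unmatched:
            break
        j = min(unmatched, key=lambda k: np.linalg.norm(world[k] - pos))
        if np.linalg.norm(world[j] - pos) < gate:
            matches[tid] = world[j]
            unmatched.remove(j)
    new_tracks = [world[j] for j in unmatched]  # spawn tracks for leftover detections
    return matches, new_tracks

In the system described above, the greedy matcher would be replaced by the constrained multi-kernel optimization, and the homography would be refined online by the continuous self-calibration step.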
Prof. Hyongsuk Kim (Member, IEEE) received the Ph.D. degree in electrical engineering from the University of Missouri, Columbia, in 1992. Since 1993, he has been a Professor with the Division of Electronics Engineering, Chonbuk National University, Jeonju, Korea. From 2000 to 2002 and again from 2009 to 2010, he was with the Nonlinear Electronics Laboratory, Electrical Engineering and Computer Science (EECS) Department, University of California at Berkeley, as a Visiting Scholar. He served as an associate editor of the IEEE Transactions on Circuits and Systems I in 2003 and II in 2012. Since 2011, he has also been working as a Research Specialist at UC Berkeley. His current research interests are memristors and their applications to cellular neural networks and deep learning neural networks.
Research on deep learning neural networks has recently achieved great success in various applications, including vision and speech recognition. The technology relies mainly on software-based digital processing employing a huge amount of memory (parameters) and parallel processing with thousands of GPUs. Demand for neural network technologies is gradually expanding to mobile applications such as drones, robots, and mobile phones. For such applications, software-based deep learning systems are far from practical due to their bulky architecture, so an evolution of neural network technology toward a concise implementation with analog circuits is indispensable. One of the highest hurdles in the analog circuit implementation of neural networks is the huge number of artificial neural synapses required. As a timely invention, the memristor, which is regarded as an ideal element for building artificial synapses, has recently appeared. Its inherent analog multiplication capability and analog programmability allow the memristor to perform synaptic weighting highly efficiently. In this lecture, various features of memristors will first be reviewed. Then, the great potential of memristors to act as artificial neural synapses will be investigated in depth. One weakness of memristors as artificial neural synapses is that their resistance is always positive, so a negative or zero weight cannot be represented with a single memristor. A versatile memristor bridge synapse architecture, which can represent negative and zero as well as positive weights, has appeared and paved the way toward memristor-based neural networks. Circuit implementation examples of memristor bridge synapse-based neural nodes and memristor bridge synapse-based multilayer neural networks will be presented. Simulation results on various application studies of memristor-based neural networks will also be shown.
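(As a rough numerical illustration of how a bridge arrangement overcomes the positive-only limitation, the sketch below evaluates the commonly cited bridge weight w = M2/(M1+M2) - M4/(M3+M4), which lies between -1 and 1 even though every individual memristance is positive; the resistance values chosen here are hypothetical.)

# Hypothetical memristance values (ohms) illustrating the bridge weight
#   w = M2 / (M1 + M2) - M4 / (M3 + M4),
# which can encode negative, zero, or positive synaptic weights.

def bridge_weight(m1, m2, m3, m4):
    return m2 / (m1 + m2) - m4 / (m3 + m4)

examples = {
    "positive weight": (10e3, 40e3, 40e3, 10e3),  # w = 0.8 - 0.2 = +0.6
    "zero weight":     (20e3, 20e3, 20e3, 20e3),  # balanced bridge, w = 0
    "negative weight": (40e3, 10e3, 10e3, 40e3),  # w = 0.2 - 0.8 = -0.6
}

for name, (m1, m2, m3, m4) in examples.items():
    w = bridge_weight(m1, m2, m3, m4)
    v_in = 1.0  # input voltage (volts); output is V_out = w * V_in
    print(f"{name:16s} w = {w:+.2f}, V_out = {w * v_in:+.2f} V")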
Dr. Yonggang Wen (S’99-M’08-SM’14) is an associate professor with the School of Computer Engineering at Nanyang Technological University, Singapore. He received his PhD degree in Electrical Engineering and Computer Science (minor in Western Literature) from the Massachusetts Institute of Technology (MIT), Cambridge, USA. Previously, he worked at Cisco, where he led product development in content delivery networks with a global revenue impact of 3 billion US dollars. Dr. Wen has published over 130 papers in top journals and prestigious conferences. His work on Multi-Screen Cloud Social TV has been featured by global media (more than 1600 news articles from over 29 countries) and received the ASEAN ICT Award 2013 (Gold Medal). His work on Cloud3DView, as the only academic entry, won the Data Centre Dynamics Awards 2015 – APAC. He is a co-recipient of the 2015 IEEE Multimedia Best Paper Award and of Best Paper Awards at EAI/ICST Chinacom 2015, IEEE WCSP 2014, IEEE Globecom 2013, and IEEE EUC 2012. He serves on the editorial boards of IEEE Wireless Communications, IEEE Communications Surveys & Tutorials, IEEE Transactions on Multimedia, IEEE Transactions on Signal and Information Processing over Networks, IEEE Access, and Elsevier Ad Hoc Networks, and was elected Chair of the IEEE ComSoc Multimedia Communications Technical Committee (2014-2016). His research interests include cloud computing, green data centers, big data analytics, multimedia networks, and mobile computing.
A fundamental tussle has emerged between the growing demand for rich applications on mobile devices and the limited supply of onboard resources (e.g., computing capacity and battery). The latter is fundamentally constrained by the physical size of mobile devices, which is in turn dictated by their mobility. In this research, we propose to leverage the seemingly unlimited resources of cloud computing to extend the capability of mobile devices via application offloading. In particular, resource-hungry mobile applications are offloaded to a cloud platform in proximity, which provides a virtual machine, as a digital clone of the physical device, to execute dynamically offloaded tasks. The mobile app offloading approach pivots on two complementary technical contributions. First, previous research efforts have focused on exploring alternative mechanisms to implement application offloading, including Cloudlet, Cloud Clone, and Weblet. Second, it demands a unified decision-making framework to understand the fundamental trade-off between communication overhead and computation saving in the presence of inherent system uncertainty (e.g., varying channel conditions and dynamic computing complexity).
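(To make this trade-off concrete, a standard back-of-envelope energy comparison, not necessarily the exact model used in this work, contrasts local execution with offloading, where C is the number of compute cycles, D the bytes exchanged over the network, S_m and S_c the local and cloud processing speeds, B the wireless bandwidth, and P_c, P_i, P_tr the device power draw while computing, idling, and transmitting:)

E_{\text{local}} = P_c \, \frac{C}{S_m},
\qquad
E_{\text{offload}} = P_{tr} \, \frac{D}{B} + P_i \, \frac{C}{S_c},
\qquad
\text{offload when } \; P_c \frac{C}{S_m} - P_i \frac{C}{S_c} - P_{tr} \frac{D}{B} > 0 .

Offloading thus pays off only when the computation energy saved on the device outweighs the energy spent communicating with and waiting for the cloud, which is exactly the balance the decision-making framework below must strike under uncertainty.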
In this talk, we present a unified mathematical framework to determine whether it is beneficial to offload a mobile app to the cloud in proximity. We model the mobile workflow as a directed graph, whose vertices represent the computing load of specific modules and whose edges represent data exchange between modules. A policy engine determines whether a vertex should be offloaded to the cloud for execution by trading the computation cost on the mobile device against the communication overhead between the mobile device and the cloud. We formulate this decision-making challenge as a constrained optimization problem, with the objective of minimizing a chosen cost metric (e.g., energy consumption) for either the mobile user or the mobile service provider, under the constraint of quality of service (QoS) requirements (e.g., a delay deadline). Our analytical framework builds on solving a series of progressively challenging sub-problems of increasing workflow-graph complexity, namely, a single-node graph, a linear topology, and a directed acyclic graph. For each of these sub-problems, we have developed either closed-form solutions or efficient algorithms that provide operational guidelines for optimal mobile app offloading.
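(As an illustration of the linear-topology sub-problem only, the sketch below runs a simple dynamic program over per-module placements in {LOCAL, CLOUD}; the cost numbers are hypothetical, the delay constraint is omitted, and this is a simplified stand-in rather than the speaker's actual algorithm.)

# Illustrative dynamic program for offloading a linear chain of modules.
# Each module i is placed LOCAL or CLOUD; total cost = execution cost at its
# placement, plus a transfer cost whenever adjacent modules run in different
# places. All numbers below are hypothetical.

LOCAL, CLOUD = 0, 1

def min_offload_cost(exec_cost, transfer_cost):
    """exec_cost[i][p]  : cost of running module i at placement p
       transfer_cost[i] : cost of shipping module i's output across the network"""
    n = len(exec_cost)
    best = list(exec_cost[0])  # best[p] = minimum cost with module 0 placed at p
    for i in range(1, n):
        best = [
            exec_cost[i][p] + min(
                best[q] + (transfer_cost[i - 1] if q != p else 0.0)
                for q in (LOCAL, CLOUD)
            )
            for p in (LOCAL, CLOUD)
        ]
    return min(best)

# Example: three modules; the middle one is compute-heavy and cheap in the cloud.
exec_cost = [[1.0, 5.0],    # module 0: cheap locally (e.g., sensing/capture)
             [9.0, 2.0],    # module 1: heavy computation, cheap in the cloud
             [1.0, 5.0]]    # module 2: cheap locally (e.g., rendering)
transfer_cost = [3.0, 3.0]  # cost of moving data between consecutive modules
print(min_offload_cost(exec_cost, transfer_cost))  # -> 10.0 (offload module 1 only)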
Our theoretical framework is further verified by two real system developments. In our group, we have developed a multi-screen cloud Social TV system (i.e., Yubigo) that allows a video-streaming session to migrate seamlessly across different screens (e.g., TV, laptop, and smartphone). It builds upon the clone-based model to offload session management onto a dedicated service container in the cloud. Our research has also inspired an open-source cloud robotics framework (i.e., Rapyuta), developed by EPFL researchers who adopted our proposed clone-based model. Both projects have been spun off as start-ups with public and/or private investment.