Prof. Björn Schuller, Imperial College London, UK, and University of Augsburg, Germany
Abstract: Intelligent Human-Machine Communication and Interaction has benefited greatly from developments in deep learning in recent years. In this tutorial lecture, we shall deal with the corresponding change in the human signal processing landscape. This includes moving from expert-crafted features to transferred and self-learnt representations and architectures of deep neural networks, from traditional pre-processing to deep source separation for the enhancement of signals of interest, and on to advanced back-end decision making. To enable these approaches, we shall discuss convolutional and recurrent network topologies and related topics such as attention modelling and connectionist temporal classification. In addition, we will discuss learnt data augmentation, such as by generative adversarial methods. To move towards real-life application in challenging HCI frameworks, we will further explore the optimal fusion of modalities, life-long learning approaches, and combinations of active and reinforcement learning to best model users over repeated interactions. Application examples will focus on spoken language and video-based communication, such as from the domain of Affective Computing.
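As a toy illustration of the attention modelling mentioned in the abstract, the sketch below (all names and numbers are hypothetical, not from the tutorial itself) pools variable-length frame-level features into a fixed-length utterance representation via softmax attention, the core operation behind attention-based sequence classification:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(frames, w):
    # frames: list of per-time-step feature vectors
    # w: attention weight vector scoring each frame's relevance
    scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in frames]
    alphas = softmax(scores)          # attention weights, sum to 1
    dim = len(frames[0])
    # attention-weighted sum of frames -> fixed-length representation
    return [sum(a * f[d] for a, f in zip(alphas, frames)) for d in range(dim)]

frames = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]  # toy frame features
utt = attention_pool(frames, w=[1.0, 1.0])
```

Frames that score highly against the attention vector dominate the pooled representation, which is what lets such models focus on salient regions of an utterance rather than averaging everything uniformly.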
Bio: Björn Schuller (IEEE M 2005 – IEEE SM 2015 – IEEE Fellow 2018) received the Diploma in 1999, the Doctoral degree for his study on Automatic Speech and Emotion Recognition in 2006, and the Habilitation and Adjunct Teaching Professorship in the subject area of Signal Processing and Machine Intelligence in 2012, all from the Technische Universität München, Munich, Germany, all in electrical engineering and information technology. He is currently a Reader in machine learning with the Department of Computing, Imperial College London, London, U.K., Full Professor and Head of the Chair of Embedded Intelligence for Health Care and Wellbeing, Augsburg University, Augsburg, Germany, and at the Centre Digitisation.Bavaria, Garching, Germany, and an Associate of the Swiss Center for Affective Sciences at the University of Geneva, Geneva, Switzerland. He has (co-)authored 5 books and more than 600 publications in peer-reviewed books, journals, and conference proceedings, leading to more than 18,000 citations (h-index = 65). Prof. Schuller is President-Emeritus of the Association for the Advancement of Affective Computing, an elected member of the IEEE Speech and Language Processing Technical Committee, and a member of the ACM and ISCA.
Robust and privacy preserving multimodal learning with body-camera signals
Prof. Andrea Cavallaro, Queen Mary University of London and The Alan Turing Institute, UK
Abstract: High-quality miniature cameras and associated sensors, such as microphones and inertial measurement units, are increasingly worn by people and embedded in robots. The pervasiveness of these ego-centric sensors is offering countless opportunities in developing new applications and in improving services through the recognition of intentions, actions, activities and interactions. However, despite this richness in sensing modalities, inferences from ego-centric data are challenging due to unconventional and rapidly changing capturing conditions. Furthermore, personal data generated by and through these sensors facilitate non-consensual, non-essential inferences when data are shared with social media services and health apps. In this talk I will first present the main challenges in learning, classifying and processing body-camera signals and then show how exploiting multiple modalities helps address these challenges. In particular, I will discuss action recognition, audio-visual person re-identification and scene recognition as specific application examples using ego-centric data. Finally, I will show how to design on-device machine learning models and feature learning frameworks that enable privacy-preserving services.
Bio: Andrea Cavallaro is Professor of Multimedia Signal Processing and the founding Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He is a Fellow of the International Association for Pattern Recognition (IAPR) and a Turing Fellow at The Alan Turing Institute, the UK National Institute for Data Science and Artificial Intelligence. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, in 2002. He was a Research Fellow with British Telecommunications (BT) in 2004/2005 and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007 and 2009; and the best paper award at IEEE AVSS 2009. Prof. Cavallaro is Editor-in-Chief of Signal Processing: Image Communication; Chair of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee; an IEEE Signal Processing Society Distinguished Lecturer; and an elected member of the IEEE Video Signal Processing and Communication Technical Committee. He is Senior Area Editor for the IEEE Transactions on Image Processing and Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology. He is a past Area Editor for the IEEE Signal Processing Magazine (2012-2014) and a past Associate Editor for the IEEE Transactions on Image Processing (2011-2015), IEEE Transactions on Signal Processing (2009-2011), IEEE Transactions on Multimedia (2009-2010), IEEE Signal Processing Magazine (2008-2011) and IEEE Multimedia. He is a past elected member of the IEEE Multimedia Signal Processing Technical Committee and past Chair of the Awards Committee of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee. Prof. Cavallaro has published over 270 journal and conference papers, one monograph on video tracking (2011, Wiley) and three edited books: Multi-camera networks (2009, Elsevier); Analysis, retrieval and delivery of multimedia content (2012, Springer); and Intelligent multimedia surveillance (2013, Springer).
Current and Future Applications of Brain-Computer Interfaces
Dr. Christoph Guger, Founder and CEO of g.tec medical engineering GmbH, Austria
Abstract: Brain-computer interfaces are realized with non-invasive and invasive technology and for many different applications. The talk will present the underlying principles and highlight important applications such as stroke rehabilitation, the assessment of brain function in patients with disorders of consciousness, and the functional mapping of the eloquent cortex with high-gamma activity in neurosurgery. Furthermore, closed-loop experiments with brain and body stimulation will be explained, and results of international research projects on controlling avatars by thought will be shown. New ideas from the international br41n.io BCI Hackathon series will also be presented.
Workshop: A “Running real-time brain-computer interface experiments” workshop will be organized to demonstrate the correct setup of a brain-computer interface. This includes the correct assembly of the EEG electrodes and data quality control. The brain-computer interface is then calibrated with user-specific EEG responses. This calibration can be used to control applications such as a speller, a painting application, or a Sphero robot. The programming environment allows new BCI applications to be configured in .NET, C#, Python, or from MATLAB/Simulink.
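To make the calibrate-then-control idea concrete, here is a minimal, hypothetical sketch of the principle; it is not g.tec's API, just an illustration of building user-specific templates from labelled EEG epochs during calibration and classifying new epochs against them afterwards:

```python
def calibrate(epochs, labels):
    # average the labelled EEG epochs per class to build template responses
    groups = {}
    for epoch, label in zip(epochs, labels):
        groups.setdefault(label, []).append(epoch)
    return {lab: [sum(v) / len(v) for v in zip(*eps)]
            for lab, eps in groups.items()}

def classify(epoch, templates):
    # nearest-template decision: the class with the smallest squared
    # distance between its template and the new epoch wins
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda lab: dist(epoch, templates[lab]))

# toy calibration data: two classes with clearly distinct "responses"
epochs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0], [0.1, 0.0, 0.9]]
labels = ["target", "target", "non-target", "non-target"]
tmpl = calibrate(epochs, labels)
```

In a real speller, the "target" class would correspond to the user's attended stimulus, and the decision would drive the application.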
Bio: Christoph Guger is the founder and CEO of g.tec medical engineering GmbH. He studied Biomedical Engineering at the Technical University of Graz, Austria, and Johns Hopkins University in Baltimore, USA. During his studies, he concentrated on BCI systems and developed many of the early foundations for real-time bio-signal acquisition and processing. g.tec produces and develops BCIs that help disabled people communicate or control their environment by thought and regain motor function after stroke. The technology is also used to optimize neurosurgical procedures with high-gamma mapping techniques. He runs several international BCI research projects.
Brain Computer Interface Systems – Overview, Design Challenges and Recent Research Developments
Prof. Sadasivan Puthusserypady, Department of Health Technology, Technical University of Denmark, Denmark
Abstract: Brain Computer Interface (BCI) systems directly use brain signals, especially the electroencephalogram (EEG), to allow users to operate an external device (computer/machine) without using muscles or peripheral nerves. They translate the EEG signals into the commands necessary to run an external machine or game interface. BCI is an assistive technology that has been used by people with a wide variety of physical and mental disorders, improving their quality of life by providing them with new opportunities. This is achieved by incorporating real-time signal processing methods for feature extraction and classification in EEG-based BCIs. This tutorial will provide an overview of EEG-based BCI systems, the EEG signals that can be detected as markers of mental activity, and the signal processing challenges in feature extraction and classification algorithms. The tutorial will also highlight some of our group’s recent research achievements in devising cost-effective, high-quality and user-friendly BCI systems for disabled as well as elderly people. These include BCI spellers (communication), BCI-assisted wheelchair control (locomotion), BCI-based schemes for enhancing the attention ability of children with ADHD (neurorehabilitation), and a BCI-controlled functional electrical stimulation (FES) system for the neurorehabilitation of post-stroke patients, to list a few. The tutorial will conclude by highlighting BCI’s potential, especially in medical applications.
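A feature-extraction-plus-classification pipeline of the kind the abstract mentions can be sketched in a few lines. The example below is purely illustrative (not the group's actual method): it uses the log-variance of each pre-filtered EEG channel as a band-power feature, as in classic motor-imagery pipelines, together with a nearest-class-mean classifier:

```python
import math

def log_variance(signal):
    # log band-power proxy: variance of a (band-pass pre-filtered) EEG channel
    mean = sum(signal) / len(signal)
    var = sum((x - mean) ** 2 for x in signal) / len(signal)
    return math.log(var + 1e-12)

def train_nearest_mean(features, labels):
    # build per-class mean feature vectors; classify by the nearest class mean
    groups = {}
    for f, lab in zip(features, labels):
        groups.setdefault(lab, []).append(f)
    means = {lab: [sum(v) / len(v) for v in zip(*fs)]
             for lab, fs in groups.items()}
    def predict(feat):
        return min(means, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(feat, means[lab])))
    return predict

# toy two-channel trials: imagined movement suppresses band power on one
# channel (event-related desynchronisation), here faked with amplitudes
strong = [math.sin(0.3 * i) for i in range(100)]        # high band power
weak = [0.1 * math.sin(0.3 * i) for i in range(100)]    # suppressed power
trials = [[strong, weak], [weak, strong]] * 3
labels = ["left", "right"] * 3
feats = [[log_variance(ch) for ch in trial] for trial in trials]
predict = train_nearest_mean(feats, labels)
```

Real systems replace the toy sinusoids with filtered multi-channel EEG and the nearest-mean rule with stronger classifiers, but the train-then-predict structure is the same.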
Bio: Sadasivan Puthusserypady received his B.Tech. (Electrical Engineering) and M.Tech. (Instrumentation and Control Systems Engineering) from the University of Calicut, India. He obtained his Ph.D. from the Indian Institute of Science, Bangalore, India. He was a Research Associate at the Department of Psychopharmacology, NIMHANS, Bangalore, India, during 1993-1996. From 1996 to 1998, he was a Postdoctoral Research Fellow at the Communications Research Laboratory, McMaster University, Canada. In 1998, he joined Raytheon Systems Canada Ltd., Waterloo, Canada, as a Senior Systems Engineer. In 2000, he moved to the National University of Singapore and worked as an Assistant Professor in the Department of Electrical and Computer Engineering until 2009. Currently, he is an Associate Professor at the Department of Health Technology, Technical University of Denmark, where he leads the BCI group. He was a visiting faculty member at the School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, UK, in May-August 2008 and October 2009. His research interests are in biomedical signal processing, brain-computer interfaces (BCIs) and home health care systems. He has published over 150 research papers in peer-reviewed, high-impact international journals and conferences, and has supervised several Master's and Ph.D. students. He is a Senior Member of the IEEE and an Associate Editor for the Journal of Medical Imaging and Health Informatics and the Journal of Nonlinear Biomedical Physics. He has served as track chair, program chair and session chair for many international and national conferences, and as an International Advisory Panel member, organizing committee member, and TPC member for many others.
Towards human-robot symbiosis: control and co-operation with intelligent tools and vehicles
Prof. David Abbink, Delft University of Technology, The Netherlands
Abstract: Near-future robot capabilities offer great potential for the next evolution in our society – provided we can effectively control, cooperate and co-exist with this technology. My talk will focus on the research of my group at the Delft Haptics Lab, where we aim to better understand how humans physically perform dynamic control tasks with robotic tools or vehicles, and design multi-modal interfaces that facilitate control and co-operation. Our research consists of iterative cycles of human modelling, interface design and human-in-the-loop evaluation, and my talk will cover these three elements. First, I will present theory and computational models of the human as an adaptive and learning hierarchical controller that can easily move across the strategic, tactical and operational levels of a task. I will illustrate the power of leveraging an understanding of low-level perception-action couplings, developed through techniques from neuroscience and system identification. Second, I will propose design guidelines for effective control and co-operation, with a particular focus on haptic shared control as a means to mitigate traditional human-automation issues. Third, I will highlight some ‘lessons learned’ from human factors experiments, and our search for methods and metrics that capture relevant human control behaviour. These three elements will be illustrated through practical applications of our work – from telerobotic arms operating in complex remote environments to highly automated driving. I will end with my personal perspectives on the future of our field, including topics like symbiotic driving (mutually adaptive driver-vehicle interaction, which essentially closes the loop on an iterative design and evaluation cycle) and the responsible integration of robotic technologies in our society.
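Haptic shared control, as discussed above, is commonly formalised as a guidance torque on the control interface that the human can comply with or override. The sketch below is a deliberately simplified, hypothetical illustration of that idea (the gains, first-order wheel dynamics and all numbers are assumptions, not the speaker's models):

```python
def shared_control_torque(theta_wheel, theta_ref, human_torque, k_guidance):
    # the guidance torque continuously pulls the wheel toward the
    # automation's reference angle; the human feels this torque through
    # the wheel and can comply with it or push against it
    guidance_torque = k_guidance * (theta_ref - theta_wheel)
    return guidance_torque + human_torque

def simulate(theta0, theta_ref, human_torque, k_guidance=2.0,
             dt=0.01, steps=1000):
    # crude first-order wheel model: angle rate proportional to net torque
    theta = theta0
    for _ in range(steps):
        theta += dt * shared_control_torque(theta, theta_ref,
                                            human_torque, k_guidance)
    return theta
```

With a passive human the wheel settles at the automation's reference; a firm opposing human input shifts the equilibrium the other way, which is exactly the "always overridable" property that distinguishes shared control from full automation.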
Bio: Prof. dr. ir. David A. Abbink (1977) received his M.Sc. degree (2002) and Ph.D. degree (2006) in Mechanical Engineering from Delft University of Technology. He is currently a full Professor at Delft University of Technology, heading the section of Human-Robot Interaction in the Department of Cognitive Robotics. His research interests include system identification, human motor control, shared control, haptic assistance, and human factors. His Ph.D. thesis on haptic support for car-following was awarded the best Dutch Ph.D. dissertation in movement sciences (2006) and contributed to the market release of Nissan’s Distance Control Assist system. David received two prestigious personal grants – VENI (2010) and VIDI (2015) – on haptic shared control for telerobotics and vehicle control. He was co-PI of the H-Haptics programme, in which 16 PhD students and 3 postdocs collaborated on designing human-centered haptic shared control for a wide variety of applications. His work has received funding from Nissan, Renault, Boeing, and the Dutch Science Foundation. He and his team have received multiple awards for scientific contributions. David was voted best teacher of his department for seven consecutive years, and best teacher of his faculty twice. David is an IEEE Senior Member, has served as an associate editor for the IEEE Transactions on Human-Machine Systems and the IEEE Transactions on Haptics, and co-founded the IEEE SMC Technical Committee on Shared Control.
User Authentication for Natural User Interfaces (NUIs)
Prof. Janusz Konrad, Boston University, Department of Electrical and Computer Engineering, Boston, MA, USA
Abstract: The landscape of user interfaces has changed dramatically in the last two decades and transformed the way we interact with computing devices. We have become accustomed to interaction involving touch, speech or gesture, and use these modalities daily on smartphones, laptops, smart speakers, wearables and AR/VR headsets. Interfaces that leverage these natural, intuitive, everyday user behaviors are known as Natural User Interfaces (NUIs). As NUI-enabled devices become prevalent, the risk of exposing private information will increase, as such devices can be used to authorize various transactions; e.g., smartwatches and smart speakers now enable e-payments, while airline smartphone apps allow updating passport information. Clearly, convenient and robust authentication mechanisms are needed to protect sensitive information and prevent unauthorized access. I will present a new authentication taxonomy for NUIs and support it with examples. Then, I will describe recent authentication methods developed at Boston University and New York University. First, I will focus on user authentication from 2-D touch gestures performed on a multi-touch screen. Then, I will discuss authentication from 3-D free-space gestures performed in front of a depth camera, using the hand and fingers, the limbs and even the whole body. Compared to traditional authentication methods based on memory, such as alphanumeric passwords, these new approaches require low mental effort and, compared to biometric methods such as face, fingerprint or iris recognition, are renewable (they can be changed if compromised). These are very appealing traits given the deluge of passwords one has to memorize today.
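Gesture-based authentication of the kind described above is often built on template matching. The sketch below is illustrative only (not the BU/NYU methods): it compares a probe gesture trace against a user's enrolled samples with dynamic time warping, a standard way to handle the speed variations of repeated gestures:

```python
def dtw_distance(a, b):
    # classic dynamic-time-warping distance between two 1-D gesture traces
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def authenticate(probe, enrolled_templates, threshold):
    # accept if the probe is close enough to any enrolled gesture sample;
    # the threshold trades off false accepts against false rejects
    return min(dtw_distance(probe, t) for t in enrolled_templates) <= threshold
```

Unlike a password hash, the enrolled templates can simply be re-recorded with a new gesture if compromised, which is the renewability property the abstract highlights.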
Bio: Janusz Konrad received the master’s degree from the Technical University of Szczecin, Poland, and the Ph.D. degree from McGill University, Montreal, QC, Canada. From 1990 to 2000 he was a post-doctoral fellow and then a faculty member at INRS-Telecommunications, Montreal. Since 2000 he has been on the faculty of Boston University, Boston, MA, USA. He is an IEEE Fellow and recently served as a Distinguished Lecturer of the IEEE Signal Processing Society. He is a co-recipient of the 2001 IEEE Signal Processing Magazine Award, the 2004-2005 EURASIP Image Communications Best Paper Award and the AVSS-2010 Best Paper Award, and the winner of the 2010 ICPR Aerial View Activity Classification Challenge. He has been actively involved in the IEEE Signal Processing Society: he was the General Chair of AVSS-2013 and the Technical Program Co-Chair of ICIP-2000 and AVSS-2010, and also served as a Member-at-Large of the SPS Conference Board. He is currently a Senior Editor of the IEEE Transactions on Image Processing and a member of the IEEE AVSS Conference Steering Committee. In the past, he served as an Area Editor of the EURASIP Signal Processing: Image Communication journal, and an Associate Editor of the IEEE Transactions on Image Processing, IEEE Communications Magazine and IEEE Signal Processing Letters, as well as the EURASIP Journal on Image and Video Processing. His research interests include video processing and computer vision, visual sensor networks, stereoscopic and 3-D imaging and displays, and human-computer interfaces.
A Tour to Deep Neural Network Architectures
Prof. Sergios Theodoridis, Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens, Greece
Abstract: In this short course, a brief tour of the Deep Neural Networks “land” will be attempted. We will begin from the “beginning”. Neural networks will be “visited”, starting from their late 19th-century “spring”, with the discovery of the neuron, and then we will “stop” at the major milestones. The artificial neuron, the perceptron and the multilayer feedforward NN will be the very first to “look” at. Backpropagation and some up-to-date related optimization algorithms will be discussed. Nonlinear activation functions will be presented in the context of their effect on the training algorithm’s convergence. In the sequel, techniques guarding against overfitting, such as the dropout approach, will be outlined. The final path will evolve along the most modern advances in the terrain. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) will be “visited” and discussed. Adversarial examples, generative adversarial networks (GANs) and the basics of capsule modules will also be part of the tour. If time allows, some “bridges” will be established that bring together deep networks and the Bayesian spirit.
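As a pocket-sized companion to the backpropagation stop on this tour, the sketch below computes the analytic gradient of a single sigmoid neuron via the chain rule and verifies it against a finite-difference estimate, a standard gradient check (all numeric values are arbitrary examples):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, b, x):
    # one-neuron "network": affine transform followed by sigmoid activation
    return sigmoid(w * x + b)

def loss(w, b, x, y):
    # squared error between the neuron's output and the target
    return 0.5 * (forward(w, b, x) - y) ** 2

def backprop_grad_w(w, b, x, y):
    # analytic dL/dw via the chain rule (the essence of backpropagation):
    # dL/da = (a - y), da/dz = a * (1 - a), dz/dw = x
    a = forward(w, b, x)
    return (a - y) * a * (1.0 - a) * x

def numeric_grad_w(w, b, x, y, eps=1e-6):
    # central finite-difference check of the same gradient
    return (loss(w + eps, b, x, y) - loss(w - eps, b, x, y)) / (2 * eps)
```

The same chain-rule bookkeeping, applied layer by layer, yields the full backpropagation algorithm for multilayer networks.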
Bio: Prof. Sergios Theodoridis is currently Professor of Signal Processing and Machine Learning in the Department of Informatics and Telecommunications of the National and Kapodistrian University of Athens, and he is the holder of a part-time Chair at the Chinese University of Hong Kong, Shenzhen. His research interests lie in the areas of adaptive algorithms, distributed and sparsity-aware learning, machine learning and pattern recognition, signal processing and learning for biomedical applications, and audio processing and retrieval. He is the author of the book “Machine Learning: A Bayesian and Optimization Perspective”, Academic Press, 2nd Edition, 2020, the co-author of the best-selling book “Pattern Recognition”, Academic Press, 4th ed., 2009, the co-author of the book “Introduction to Pattern Recognition: A MATLAB Approach”, Academic Press, 2010, the co-editor of the book “Efficient Algorithms for Signal Processing and System Identification”, Prentice Hall, 1993, and the co-author of three books in Greek, two of them for the Greek Open University. He is the co-author of seven papers that have received Best Paper Awards, including the 2014 IEEE Signal Processing Magazine Best Paper Award and the 2009 IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award. He is the recipient of the 2017 EURASIP Athanasios Papoulis Award, the 2014 IEEE Signal Processing Society Education Award and the 2014 EURASIP Meritorious Service Award. He has served as a Distinguished Lecturer for the IEEE Signal Processing and the Circuits and Systems Societies. He was Otto Mønsted Guest Professor, Technical University of Denmark, 2012, and holder of the Excellence Chair, Dept. of Signal Processing and Communications, University Carlos III, Madrid, Spain, 2011. He currently serves as Vice President of the IEEE Signal Processing Society.
He has served as President of the European Association for Signal Processing (EURASIP), as a member of the Board of Governors of the IEEE Circuits and Systems (CAS) Society, as a member of the Board of Governors (Member-at-Large) of the IEEE SP Society, and as Chair of the Signal Processing Theory and Methods (SPTM) Technical Committee of IEEE SPS. He has served as Editor-in-Chief for the IEEE Transactions on Signal Processing. He is Editor-in-Chief for the Signal Processing Book Series, Academic Press, and co-Editor-in-Chief for the E-Reference Signal Processing, Elsevier. He is a Fellow of IET, a Corresponding Fellow of the Royal Society of Edinburgh (RSE), a Fellow of EURASIP and a Life Fellow of IEEE.