Poor choices of evaluation measures are holding back AI and leading to biased, easily-spoofed
classifiers.
It is important to bias learners toward recognizing the “right” kind of feature and ignoring the
“wrong” kind of artefact. It is also essential to understand the nature of bias – in particular, when
a human-like bias is desirable, and when a different bias is more appropriate. If you want a neural
network to provide similar results to a human, then using the same biases is appropriate – and
failure to do so leads to easily-spoofed classifiers. But if you are solving a problem that people
aren’t good at, then a human-like bias will be counterproductive. Do we even know what biases our
neural network algorithms have? Or which biases in the data they are able to deal with appropriately?
The challenge for the immediate future is to design learning algorithms and neural networks that
optimize the right thing from the start, rather than trying ad hoc defensive or adversarial
techniques to patch the problem post hoc. Every bias implies a (class of) model, and every model
represents a distillation of (often invalid) assumptions and biases. This applies both to models of
distributions of classes, as well as to models of errors or attacks.
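To make the point concrete, here is a minimal sketch in Python (illustrative only, not taken from the talk) of how accuracy rewards a degenerate classifier on imbalanced data, while informedness (sensitivity + specificity - 1), one of the measures Powers has advocated, exposes it:

    # Why accuracy can mislead on imbalanced data, while informedness
    # reveals a classifier that has learned nothing.
    import numpy as np

    rng = np.random.default_rng(0)
    y_true = (rng.random(10_000) < 0.05).astype(int)   # 5% positives
    y_pred = np.zeros_like(y_true)                     # always predict "negative"

    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    accuracy = (tp + tn) / len(y_true)                  # ~0.95, looks great
    sensitivity = tp / (tp + fn) if tp + fn else 0.0    # 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0    # 1.0
    informedness = sensitivity + specificity - 1.0      # 0.0, exposes the sham

    print(f"accuracy={accuracy:.3f}  informedness={informedness:.3f}")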
Bio:
David Powers is Professor of Artificial Intelligence and Cognitive Science at Flinders University in
Adelaide, South Australia, and is recognized as a pioneer in several areas of Artificial
Intelligence, Biomedical and Robotic Engineering, and Parallel Computing. Prof. Powers organized the
first events in Computational Natural Language Learning and founded SIGNLL and CoNLL, the peak
association and conference in that area. His intelligent computing technology has formed the basis
for eight startups, selling under brands including Clipsal Homespeak, Clevertar, YourAmigo and
YourAnswer. Many of his AI applications and much of his current research are focussed on assistive
and educational technology, helping people to overcome a variety of ageing- and health-related
conditions, including autism spectrum disorders, addiction, dementia, quadriplegia and locked-in
syndrome. He has authored around 300 scientific papers and has both edited and authored several books.
Multimodal Emotion Recognition Using Deep Learning
Bao-Liang Lu
blu@cs.sjtu.edu.cn
Abstract:
Emotion plays an important role in human-to-human communication in our daily life. Besides logical
intelligence, emotional intelligence is considered an important part of human intelligence,
representing the ability to perceive, understand, and respond to emotion. However, existing
human-computer interaction systems still lack emotional intelligence. Emotion recognition aims to
narrow the communication gap between humans and machines by developing computational models of
emotion. In this talk, I present our recent work on multimodal emotion recognition using deep
learning, including an emotion recognition framework that uses EEG and eye movements to model both
the internal cognitive states and external subconscious behaviors of humans, transfer learning
methods for dealing with individual differences across subjects and the non-stationary
characteristics of EEG signals, and a GAN-based data augmentation method for enhancing the
performance of multimodal emotion recognition models.
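As a rough sketch of the kind of multimodal fusion described above (Python with PyTorch; the feature dimensions, layer widths and three emotion classes are illustrative assumptions, not the authors' actual architecture):

    import torch
    import torch.nn as nn

    class MultimodalEmotionNet(nn.Module):
        # Hypothetical late-fusion model: encode each modality separately,
        # then concatenate the embeddings and classify.
        def __init__(self, eeg_dim=310, eye_dim=33, n_classes=3):
            super().__init__()
            self.eeg_branch = nn.Sequential(nn.Linear(eeg_dim, 128), nn.ReLU())
            self.eye_branch = nn.Sequential(nn.Linear(eye_dim, 32), nn.ReLU())
            self.classifier = nn.Linear(128 + 32, n_classes)

        def forward(self, eeg, eye):
            h = torch.cat([self.eeg_branch(eeg), self.eye_branch(eye)], dim=-1)
            return self.classifier(h)

    model = MultimodalEmotionNet()
    logits = model(torch.randn(8, 310), torch.randn(8, 33))  # batch of 8
    print(logits.shape)  # torch.Size([8, 3])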
Bio:
Bao-Liang Lu received the Ph.D. degree in electrical engineering from Kyoto University, Kyoto,
Japan, in 1994. From April 1994 to March 1999, he was a Frontier Researcher at the Bio-Mimetic
Control Research Center, the Institute of Physical and Chemical Research (RIKEN), Japan. From April
1999 to August 2002, he was a research scientist at the RIKEN Brain Science Institute, Japan.
Since August 2002, he has been a full professor in the Department of Computer Science and
Engineering, Shanghai Jiao Tong University, China. He is director of the Center for Brain-Like
Computing and Machine Intelligence and of the Key Laboratory of Shanghai Education Commission for
Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University. His research
interests include brain-like computing, neural networks, machine learning, brain-computer
interfaces and affective computing. He received the IEEE Transactions on Autonomous Mental
Development Outstanding Paper Award from the IEEE Computational Intelligence Society in 2018. He is
a past President of the Asia Pacific Neural Network Assembly and was General Chair of the 18th
International Conference on Neural Information Processing. He is a Steering Committee Member of the
IEEE Transactions on Affective Computing, an Associate Editor of the IEEE Transactions on Cognitive
and Developmental Systems, and a Senior Member of the IEEE.
Deep learning and knowledge representation in brain-inspired spiking neural
networks
Nikola Kasabov
Abstract:
The talk argues and demonstrates that the third generation of artificial neural networks,
the spiking neural networks (SNN), can be used to design brain-inspired architectures that
are not only capable of deep learning of temporal or spatio-temporal data, but also enable
the extraction of deep knowledge representations from the learned data. Similarly to how the
brain learns time-space data, these SNN models need not be restricted in the number of
layers, the number of neurons in each layer, etc. When an SNN model is designed to follow a
brain template, knowledge transfer between humans and machines becomes possible in both
directions through the creation of brain-inspired Brain-Computer Interfaces (BCI). The
presented approach is illustrated on an exemplar SNN architecture, NeuCube (free and
open-source software available from www.kedri.aut.ac.nz/neucube), and on case studies of
brain and environmental data modelling and knowledge representation using incremental and
transfer learning algorithms. These include predictive modelling of EEG and fMRI data
measuring cognitive processes and response to treatment, AD prediction, BCI for
neuro-rehabilitation, human-human and human-VR communication, audio-visual data learning,
hyper-scanning and others. More details can be found in the recent book: N. Kasabov,
Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer,
2019, https://www.springer.com/gp/book/9783662577134.
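For readers new to SNN, here is a minimal sketch of the leaky integrate-and-fire neuron that underlies such architectures (Python with NumPy; the constants are illustrative assumptions, not taken from NeuCube):

    import numpy as np

    def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
        """Return the spike train produced by one leaky integrate-and-fire neuron."""
        v, spikes = 0.0, []
        for i in input_current:
            v += dt * (-v / tau + i)      # leaky integration of the input current
            if v >= v_thresh:             # crossing the threshold emits a spike
                spikes.append(1)
                v = v_reset               # membrane potential resets after a spike
            else:
                spikes.append(0)
        return np.array(spikes)

    rng = np.random.default_rng(1)
    print(lif_simulate(rng.uniform(0.0, 0.15, size=50)))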
Evolutionary Diversity Optimisation
Frank Neumann
Abstract:
Evolutionary computing techniques have been applied to a wide range of complex engineering
problems. In many engineering applications and in the field of algorithm selection /
configuration, it is beneficial to produce a set of solutions that is (1) of high quality
and (2) diverse with respect to the search space and/or some features of the given problem.
Evolutionary Diversity Optimisation enables the computation of a large variety of new and
innovative solutions that are unlikely to be produced by traditional evolutionary
computation methods for single-objective or multi-objective optimisation.
In this talk, I will give an overview of evolutionary diversity optimisation, a new and important
research area within evolutionary computation that aims to provide sets of diverse solutions.
Furthermore, I will show how this approach can be used in combination with neural
networks and outline some directions for future work.
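The basic loop can be sketched as follows (Python; the OneMax quality function, threshold, and mutation scheme are illustrative assumptions, not the talk's algorithms): offspring are rejected unless they meet a quality threshold, and survive only if they increase population diversity.

    import random

    N, MU, THRESHOLD, GENERATIONS = 20, 8, 12, 2000

    def quality(x):                      # OneMax: number of ones
        return sum(x)

    def diversity(pop):                  # mean pairwise Hamming distance
        d = [sum(a != b for a, b in zip(p, q)) for p in pop for q in pop]
        return sum(d) / (len(pop) ** 2)

    pop = [[1] * N for _ in range(MU)]   # start from feasible solutions
    for _ in range(GENERATIONS):
        parent = random.choice(pop)
        child = [1 - bit if random.random() < 1 / N else bit for bit in parent]
        if quality(child) >= THRESHOLD:  # reject infeasible offspring
            for i in range(MU):          # accept a swap only if it helps diversity
                trial = pop[:i] + [child] + pop[i + 1:]
                if diversity(trial) > diversity(pop):
                    pop = trial
                    break

    print(f"diversity={diversity(pop):.2f}, qualities={[quality(p) for p in pop]}")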
Physical Reservoir Computing Devices: Truly Neural Hardware in the AI and
Sensor-Network Era
Akira Hirose, Ryosho Nakane and Gouhei Tanaka
Abstract:
In this invited talk, we discuss desirable architectures of truly neural hardware. We
revisit the principles of neural processing, that is, "pattern information representation"
and "pattern information processing," which is very different from the principles of von
Neumann-type computers, namely, "symbol information representation" and "symbol information
processing." From this viewpoint, we discuss physical reservoir computing, with which we
will be able to resolve the inconsistency between neural artificial intelligence (neuro-AI)
and its present hardware. This leads to various practical merits such as extremely low
energy consumption in edge computing. In particular, we focus on wave-based hardware such as
lightwave and spin-wave reservoir computing. We also refer to complex-valued neural networks
as the framework for such wave-based neural networks. We will find that wave-based physical
reservoir computing devices bring neural hardware research to a completely new stage.
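The reservoir computing principle itself can be sketched in software. In the following minimal echo state network (Python with NumPy; all sizes and scalings are illustrative assumptions), the reservoir is a fixed random dynamical system (in a physical device, the wave medium itself plays this role) and only the linear readout is trained:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, T = 1, 100, 500

    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

    u = rng.uniform(-1, 1, (T, n_in))             # input signal
    y_target = np.roll(u[:, 0], 3)                # task: recall the input 3 steps ago

    x = np.zeros(n_res)
    states = np.empty((T, n_res))
    for t in range(T):
        x = np.tanh(W_in @ u[t] + W @ x)          # fixed, untrained reservoir dynamics
        states[t] = x

    # Train only the linear readout, here by ridge regression.
    ridge = 1e-6
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ y_target)
    print("train MSE:", np.mean((states @ W_out - y_target) ** 2))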
Classification and regression algorithms on streaming data
Leszek Rutkowski
Abstract:
This talk presents a unique approach to stream data mining. Unlike the vast majority of
previous approaches, which are largely based on heuristics, it highlights methods and
algorithms that are mathematically justified. First, it describes how to adapt static
decision trees to accommodate data streams; in this regard, new splitting criteria are
developed to guarantee that they are asymptotically equivalent to the classical batch tree.
Moreover, new decision trees are designed, leading to the original concept of hybrid trees.
In turn, nonparametric techniques based on Parzen kernels and orthogonal series are employed
to address concept drift in the problem of non-stationary regressions and classification in
a time-varying environment. Lastly, an extremely challenging problem that involves designing
ensembles and automatically choosing their sizes is described and solved. The content of the
talk will be beneficial for researchers and practitioners who deal with stream data, e.g., in
telecommunications, banking, and sensor networks. The material presented in the lecture is
based on the recent book: Leszek Rutkowski, Maciej Jaworski, Piotr Duda, “Stream Data Mining: Algorithms and Their
Probabilistic Properties”, Studies in Big Data, Springer 2020.
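To illustrate the flavor of mathematically justified splitting in streaming decision trees, the following sketch (Python) uses the classical Hoeffding bound; the talk's new criteria differ, so this is background rather than the lecture's own method:

    import math

    def hoeffding_bound(value_range, delta, n):
        """Deviation epsilon such that the true mean lies within epsilon of the
        sample mean with probability at least 1 - delta, after n observations."""
        return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

    def should_split(gain_best, gain_second, n, value_range=1.0, delta=1e-6):
        # Split only when the gap between the two best attributes exceeds epsilon,
        # so the observed winner is the true winner with confidence 1 - delta.
        return gain_best - gain_second > hoeffding_bound(value_range, delta, n)

    print(should_split(0.30, 0.25, n=500))    # False: not enough evidence yet
    print(should_split(0.30, 0.25, n=5000))   # True: the gap is now significant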
Exploiting Latent Space in Deep Learning for Practical Applications
Sung-Bae Cho
Abstract:
Latent space in a deep learning model provides an important representation of the problem at
hand, which enables efficient data analysis and high-performance classification. The variational
autoencoder (VAE) is the most popular model for exploring the latent space using variational
inference, but it has limitations in practical applications, as it only approximates the latent
space with a simple Gaussian distribution. In this talk, we present several techniques that
exploit the latent space by defining it with a mixture of multivariate Gaussian distributions
or with a distribution on a constant-curvature manifold (CCM). In order to verify them, we
present three applications: predicting electric power demand, detecting malware for cyber
security, and detecting anomalies in video sequences. These use, respectively, 1) a
state-explainable autoencoder that encodes power demand up to the present and transcribes it
into the latent space, 2) a latent space predefined with a mixture of multivariate Gaussian
distributions to enhance the performance of malware generation and detection, and 3) an
adversarial autoencoder based on the CCM for detecting various outliers in surveillance video.
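For context, here is a minimal VAE sketch (Python with PyTorch; all sizes are illustrative assumptions) showing the single-Gaussian latent space that the talk argues is too simple; the presented methods replace this prior with a Gaussian mixture or a constant-curvature manifold:

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=16):
            super().__init__()
            self.enc = nn.Linear(x_dim, 128)
            self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, x_dim), nn.Sigmoid())

        def forward(self, x):
            h = torch.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
            return self.dec(z), mu, logvar

    def elbo_loss(x, x_hat, mu, logvar):
        recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu ** 2 - logvar.exp())  # KL to N(0, I)
        return recon + kl

    model = VAE()
    x = torch.rand(8, 784)
    print(elbo_loss(x, *model(x)).item())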