Special session proposals are invited for the 2016 IEEE WCCI. Proposals should include the title, aim and scope, a list of main topics, and the names and short biographies of the organizers.
All special session proposals should be submitted to the following Special Session Chairs before 15 November 2015.
IJCNN 2016 Special Sessions: Dr. Zhi-Hua Zhou (for neural networks and learning systems related topics, and hybrids of neural networks, learning systems and other computational intelligence technologies). Papers submitted to this special session track (if accepted and presented) will be published in the IJCNN proceedings.
FUZZ-IEEE 2016 Special Sessions: Dr. Uzay Kaymak (for fuzzy systems related topics, and hybrids of fuzzy systems and other computational intelligence technologies). Papers submitted to this special session track (if accepted and presented) will be published in the FUZZ-IEEE proceedings.
IEEE CEC 2016 Special Sessions: Dr. Mengjie Zhang (for evolutionary computation related topics, and hybrids of evolutionary computation and other computational intelligence technologies). Papers submitted to this special session track (if accepted and presented) will be published in the IEEE CEC proceedings.
IEEE WCCI 2016 Cross-Disciplinary and Computational Intelligence Applications Special Sessions: Dr. Chuan-Kang Ting (for cross-disciplinary and computational intelligence applications). Papers submitted to this cross-disciplinary and CI applications special session track (if accepted and presented) will be published in whichever of the three conference proceedings (IJCNN, FUZZ-IEEE, or IEEE CEC) is most appropriate to the paper. This decision will be made by the Special Session Organizers in consultation with the Special Session Chair and one of the three Conference Chairs.
Organized by Leonid Perlovsky, José F. Fontanari, Asim Roy, Angelo Cangelosi and Daniel Levine
Recent progress opens new directions for modeling the mind and brain and for developing cognitive algorithms for engineering applications. Cognitive algorithms solve traditional engineering problems much better than before, and new areas of engineering are opening up that model human abilities in cognition, emotion, language, art, music, and culture. Cognitive dissonance and behavioral economics form another active area of research. A wealth of data is available about the ways humans perform various cognitive tasks (e.g., scene and object recognition, language acquisition, interaction of cognition and language, music cognition, cognitive dissonance) as well as about the biases involved in human judgment and decision making (e.g., prospect theory and fuzzy-trace theory). A wealth of data on the web can also be exploited for extracting cognitive data. Explaining these laws and biases using realistic neural network architectures, including neural modeling fields, as well as more traditional learning algorithms, requires a multidisciplinary effort.
Scope and Topics
The aim of this special session is to provide a forum for the presentation of the latest data, results, and future research directions on the mathematical modeling of higher cognitive functions using neural networks and neural modeling fields, as well as cognitive algorithms that exploit web data and solve traditional and newly emerging engineering problems, including genetic association studies, medical applications, Deep Learning, and Big Data. The special session invites submissions in any of the following areas:
Keywords: Cognition, Emotions, Decision-Making, Dynamic Logic, Language Acquisition, Language Emotionality, Cognitive Dissonance, Music Cognition, Models of Cultures, Neural Modeling Fields, ART Neural Network, Fuzzy-Trace Theory, Prospect Theory, Deep Learning, Genome Associations, Big Data.
Organized by Anna Rakitianskaia and Andries Engelbrecht
Nature-inspired algorithms have been successfully applied to neural network training, neural network architecture optimization, and neural network architecture construction. Applications of nature-inspired algorithms to neural networks are diverse, and they are often hybridized with more traditional gradient-descent based methods. Compared to gradient-based methods, nature-inspired algorithms are less sensitive to weight initialisation, less likely to become trapped in local optima, and independent of the activation function gradient. Despite the relative success of nature-inspired algorithms in the neural network context, a solid theoretical foundation for such applications is often lacking. Successful applications of nature-inspired methods to newer neural network paradigms such as deep learning are yet to be seen. Some nature-inspired algorithms have been shown to suffer from stagnation when applied to neural networks. Optimizing large real-world neural networks is a challenging task due to the inherent high dimensionality of the weight space, high correlation between individual weights, and our limited knowledge of error landscape properties in high dimensions. Nature-inspired methods must therefore scale well to high dimensions to be usable in a real-world context.
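To make the gradient-free training idea above concrete, here is a minimal sketch (our illustration, not a method prescribed by the session organizers) that trains a tiny feed-forward network with particle swarm optimization; the network size, PSO hyperparameters and synthetic data are assumptions chosen for brevity.

```python
# Minimal sketch (illustrative only): training a tiny feed-forward network with
# particle swarm optimization instead of gradient descent. Network size, PSO
# hyperparameters and the synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                 # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)        # toy XOR-like targets

N_HIDDEN = 5
DIM = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1     # weights + biases of a 2-5-1 net

def unpack(w):
    i = 0
    W1 = w[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = w[i]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return np.mean((out - y) ** 2)               # mean squared error

# Standard global-best PSO over the flattened weight vector.
SWARM, ITERS, W_INERTIA, C1, C2 = 30, 200, 0.72, 1.49, 1.49
pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best training MSE:", pbest_f.min())
```

Each particle encodes the full weight vector, so the same loop applies to any architecture that can be flattened into a vector; no activation-function gradient is ever required.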
Scope and Topics
The aim of this special session is to investigate the existing nature-inspired approaches to neural network optimization, to develop new efficient nature-inspired neural network training and architecture optimization algorithms, to encourage discussion of the existing challenges, to identify problems, and to propose solutions. The proposed special session will provide an excellent forum for fellow researchers in this exciting cross-disciplinary field. The topics of the special session include, but are not limited to:
Organized by José García-Rodríguez, Sergio Escalera, Alexandra Psarrou, Isabelle Guyon and Andrew Lewis
Over the last decades there has been an increasing interest in using machine learning methods combined with computer vision techniques to create autonomous systems that solve vision problems in different fields. This special session is designed to enable researchers and developers to publish original, innovative and state-of-the-art algorithms and architectures for real-time applications in the areas of computer vision, image processing, biometrics, virtual and augmented reality, neural networks, intelligent interfaces and biomimetic object-vision recognition.
This special session provides a platform for academics, developers, and industry-related researchers belonging to the vast communities of Neural Networks, Computational Intelligence, Machine Learning, Biometrics, Vision Systems, and Robotics to discuss, share experience, and explore traditional and new ways of combining computer vision and machine learning to solve a range of problems. The objective of the session is to bring the growing international community of researchers working on the application of machine learning methods in vision and robotics into a fruitful discussion on the evolution and the benefits of this technology to society.
Scope and Topics
The Special Session topics can be identified by, but are not limited to, the following subjects:
Organized by Brijesh Verma and Mohammed Bennamoun
There is great interest in machine learning algorithms among computer vision researchers. Many machine learning algorithms have successfully demonstrated the capability to solve real-world problems in the computer vision field. The purpose of the special session on Machine Learning for Computer Vision is to address the latest developments in machine learning algorithms for the numerous applications in computer vision.
Scope and Topics
This session aims to bring together machine learning and computer vision researchers to demonstrate the latest progress, highlight new research questions, and collaborate on promising future research directions.
The theme of the session is the application of machine learning to computer vision. The list of topics includes, but is not restricted to, the following:
Organized by Wei-Chang Yeh, Liang Feng and Yew-Soon Ong
Today, neural networks are widely recognized as useful frameworks for modeling multidimensional nonlinear relationships. They have been successfully applied to real-world applications including signal processing, robot control, classification, etc. Recently, they have also been employed to construct deep architectures for deep learning, which model high-level abstractions in data and have achieved considerable success in applications such as natural language processing, music signal recognition, computer vision and automatic speech recognition. Despite the success achieved by neural networks, constructing multilayer neural networks involves challenging optimization problems, i.e., finding an appropriate architecture and the corresponding optimal weights for the core applications of interest.
Evolutionary computation and swarm intelligence are nature-inspired heuristic methods with global search capability that have attracted extensive attention over the last decades. They have been successfully applied to complex optimization problems including continuous optimization, combinatorial optimization, constrained optimization, etc. The aim of this special session is to provide a forum for researchers in the field of neural networks to exchange their latest advances in the theory, technology, and practice of optimizing neural networks, especially those with deep and large architectures, using evolutionary computation and swarm intelligence.
Scope and Topics
This Special Session on “Optimizing Neural Networks via Evolutionary Computation and Swarm Intelligence” mainly focuses on exploring evolutionary computation and swarm intelligence methodologies for optimizing neural network architectures. Despite the significant amount of research that has been done on neural networks, many open issues and intriguing challenges remain in optimizing neural network architectures, especially in today’s deep learning context, where neural networks usually have many layers and a large number of neurons.
Authors are invited to submit their original and unpublished work in areas including, but not limited to:
Organized by Erik Cambria, Amir Hussain and Newton Howard
As the Web rapidly evolves, Web users are evolving with it. In an era of social connectedness, people are becoming increasingly
enthusiastic about interacting, sharing, and collaborating through social networks, online communities, blogs, Wikis, and other
online collaborative media. In recent years, this collective intelligence has spread to many different areas, with particular
focus on fields related to everyday life such as commerce, tourism, education, and health, causing the size of the Social Web to
expand exponentially.
The distillation of knowledge from such a large amount of unstructured information, however, is an extremely difficult task, as
the contents of today’s Web are perfectly suitable for human consumption, but remain hardly accessible to machines. The opportunity
to capture the opinions of the general public about social events, political movements, company strategies, marketing campaigns, and
product preferences has raised growing interest both within the scientific community, leading to many exciting open challenges, as
well as in the business world, due to the remarkable benefits to be had from marketing and financial market prediction.
The main aim of this Special Session is to explore the new frontiers of big data computing for opinion mining and sentiment analysis
through computational intelligence techniques, in order to more efficiently retrieve and extract social information from the Web.
Scope and Topics
The Special Session aims to provide an international forum for researchers in the field of big data computing for opinion mining
and sentiment analysis to share information on their latest investigations in social information retrieval and their applications both
in academic research areas and industrial sectors. The broader context of the Special Session encompasses information retrieval, natural
language processing, web mining, semantic web, and computational intelligence. Topics of interest include but are not limited to:
Organized by Shiliang Sun, Yuanbin Wu, Huawen Liu and Yong Ma
Probabilistic models and kernel methods are two of the core techniques in machine learning and pattern recognition. During the past twenty years, many successful applications based on them have been developed. Probabilistic models are known as a solid framework for modeling uncertainty and dependency, while kernel methods provide a powerful approach to modeling nonlinear relationships through the use of linear methods and the kernel trick. These two types of techniques are closely related, and some methods can be understood from both sides. Building good and scalable models and devising effective and efficient inference methods are important concerns for research on probabilistic models; creatively applying kernel methods to solve problems such as those in semi-supervised learning, multi-view learning, transfer learning, and multi-task learning is also an active research field. This special session intends to provide a platform for researchers to discuss and report related progress.
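As a small illustration of the kernel trick mentioned above, the following sketch fits a nonlinear function with kernel ridge regression, i.e., a linear method applied to a Gram matrix; the RBF kernel, regularization constant and toy data are our assumptions, not part of the session description.

```python
# Minimal sketch (illustrative): the kernel trick via kernel ridge regression.
# A linear solve on the Gram matrix K yields a nonlinear predictor; the RBF
# kernel, the ridge constant and the toy data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

def rbf_kernel(A, B, gamma=0.5):
    # k(a, b) = exp(-gamma * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2
K = rbf_kernel(X, X)
# Dual solution: alpha = (K + lam I)^{-1} y ; prediction f(x) = sum_i alpha_i k(x, x_i)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.array([[0.0], [1.5]])
y_pred = rbf_kernel(X_test, X) @ alpha
print(y_pred)   # roughly sin(0.0) and sin(1.5)
```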
Scope and Topics
The special session covers theory, models, algorithms and applications for probabilistic models and kernel methods. Typical topics include the following (but are not limited to):
Organized by Quan Zou
With the age of big data upon us, machine learning techniques face tremendous challenges. Data comes from multiple sources and is becoming large and hybrid. Due to the diversity of data acquisition and storage, heterogeneous information is now widely used for recording and conveying semantics, most notably in multimedia and bioinformatics data. Advanced machine learning techniques have developed quickly in recent years, and several influential new methods have been reported in high-profile journals and conferences. For example, affinity propagation was published in Science as a novel clustering algorithm, and the extreme learning machine was proposed in Neurocomputing and became its most cited and downloaded paper. Recently, deep learning has become a hot topic and appears well suited to big multimedia data. Parallel frameworks, such as Mahout, have also been developed by academic and industrial researchers. More and more computer scientists are devoting themselves to advanced large-scale and heterogeneous machine learning techniques. However, real-world applications still lag behind this technical growth.
This special session will target recent large-scale machine learning techniques together with their applications. We especially welcome novel classification and clustering algorithms, such as learning strategies for large-scale hybrid and heterogeneous data, strategies for large-scale imbalanced learning, strategies for multi-view learning, strategies for various forms of semi-supervised learning, strategies for multiple kernel learning, etc. Applications to multimedia and biological scalable hybrid data are encouraged. Machine learning theory without a real-world application will not be accepted. We also encourage authors to supply their code and open their real data, which would increase the impact of the session. Please do not test your algorithm only on UCI or other benchmark data.
Scope and Topics
The organizers expect to collect a set of recent advances in the related topics and to provide a platform for researchers to exchange their innovative ideas and real application data.
Typical topics include the following (but are not limited to):
Organized by Yi Lu Murphey, Mahmoud Abou-Nasr, Ishwar K Sethi, Robert Karlsen, Ana Bazzan and Chaomin Luo
The research and development of intelligent vehicles and transportation systems are rapidly growing
worldwide. Intelligent transportation systems are making transformative changes in all aspects of
surface transportation based on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity.
With the decreasing costs of sensors and computer chips, and increasing computing power and data storage capacity, it has become practical to build a host of intelligent devices into cars that can be used for airbag control, unwelcome intrusion detection, collision warning and avoidance, power management and navigation, driver alertness monitoring, etc. Computational intelligence plays a vital role in building all
types and levels of intelligence in vehicle and transportation systems.
The objective of this special session is to provide a forum for researchers and practitioners to present
advanced research in computational intelligence with a focus on innovative applications to intelligent
vehicle and transportation systems.
Scope and Topics
This session seeks contributions on the latest developments and
emerging research in all aspects of intelligent vehicle and transportation systems. Specific topics for the
session include, but are not limited to:
Organized by Zhaoxiang Zhang, Xiang Bai, Rongrong Ji and Chuanping Hu
Intelligent video surveillance is an important topic in the field of computer vision and pattern recognition. Significant progress has been achieved in the last decades, from object detection, tracking and parsing to activity recognition and video understanding. With the development of internet technology and the ubiquitous presence of low-cost surveillance cameras, surveillance video has become a typical form of big data, offering both opportunities and challenges for intelligent video surveillance. On one hand, the mass of data contains more abundant information to mine. On the other hand, it suffers from various difficulties such as noise, label deficiency and computational complexity. This special session focuses on learning methods that achieve high-performance video analysis and understanding under uncontrolled environments and at large scale, which is a very challenging problem attracting much attention from both academia and industry. We hope this topic will aggregate top-level work on new advances in video analysis and understanding from big surveillance data.
Scope and Topics
We solicit original contributions from researchers and practitioners in academia as well as industry that address a wide range of theoretical and applied issues. The topics of interest include, but are not limited to:
Organized by Cheng-Lin Liu and Zhaoxiang Zhang
Machine learning, with the aim of building intelligent systems by learning models or knowledge from data, has achieved great progress in the past 30 years. However, a huge gap in learning ability still exists between machine learning and human learning. For example, a five-year-old child can identify objects and understand speech and language by learning from a small number of instances or from daily communication, whereas machines can hardly match this ability even by learning from big data. In recent years, some researchers have attempted to develop machine learning methods that simulate human learning behavior. Such methods, called “human-like learning”, share some characteristic features: learning from small amounts of supervised data, interactivity, all-time incremental (lifelong) learning, and exploiting contexts and the correlations between different data sources and tasks. Some existing learning methods, such as incremental learning, active learning, transfer learning, domain adaptation, learning with use, multi-task learning, and zero-shot/one-shot learning, can be viewed as special or simplified forms of human-like learning. The future trend is to make learning methods more flexible and active, requiring less supervision and exploiting all kinds of data more adequately.
Scope and Topics
The topics of interest include, but are not limited to:
Organized by Guoqiang Zhong, Junyu Dong, Xinghui Dong, Hui Yu and Mohamed Cheriet
Deep learning is a topic of broad interest, both to researchers who develop new deep architectures and learning algorithms and to practitioners who apply deep learning models to a wide range of applications, from image classification to video tracking and beyond. Brain-like computing combines computational techniques with cognitive ideas, principles and models inspired by the brain to build information systems used in everyday life. Pattern recognition is a classical area of artificial intelligence, which focuses on the recognition of patterns and regularities in data. Recently, there has been very rapid and impressive progress in these three areas, in terms of both theory and applications, but many challenges remain. This session aims to bring together researchers in machine learning and related areas to discuss the utility of deep learning for brain-like computing and pattern recognition, the advances made, the challenges we face, and to brainstorm about new solutions and directions.
Scope and Topics
A non-exhaustive list of relevant topics:
Organized by Cristian Rodriguez Rivero, Hector Daniel Patiño, Julian Antonio Pucheta, Gustavo Juarez and Leonardo Franco
Over the past few decades, forecasting began with the application of simple statistical procedures with considerable heuristic or judgmental input; then, in the 1980s, sophisticated time-series models started to be used by some dynamic system operators, and these approaches became pioneering works in this field.
The application of soft computing methods, including support vector regression (SVR), fuzzy inference systems (FIS) and artificial neural networks (ANN), to time-series forecasting (TSF) has been growing rapidly, helping to unify the field of forecasting and to bridge the gap between theory and practice, making forecasting useful and relevant for decision-making in many fields of science.
The purpose of this session is to hold smaller, informal meetings where experts in a particular field of forecasting can discuss forecasting problems, research, and solutions in the field of automatic control. There is generally a nominal registration fee associated with attendance.
This session aims to debate and find solutions for problems facing the field of forecasting. We wish to hear from people working in different research areas, as well as practitioners, professionals and academics involved in these problems.
Scope and Topics
The session seeks to foster the presentation and discussion of innovative techniques, implementations and applications for problems that involve forecasting, especially real-world problems in control and automation.
Organized by Min-Ling Zhang, Fuzhen Zhuang and Bing Han
Supervised learning techniques have been widely used in numerous real-world applications, ranging from information retrieval, multimedia content analysis, web mining and business intelligence to bioinformatics, research expeditions, geological surveillance, public security, and so on. Traditional supervised learning makes several simplifying assumptions to facilitate the induction of learning systems, such as the strong supervision assumption that training examples carry sufficient and explicit supervision information, the single-domain assumption that training examples come from an identical domain, and the uniform distribution assumption that training examples are class-balanced and have equal misclassification costs.
Nonetheless, the above simplifying assumptions may not fully hold in practice due to various constraints imposed by the physical environment, problem characteristics, and resource limitations. In recent years, research on advanced supervised learning techniques has been growing rapidly to meet the increasing need to learn from data in non-trivial environments. The aim of this special session is to bring together researchers and practitioners who work on various aspects of advanced supervised learning, to discuss the state of the art and open problems, to share expertise and exchange ideas, and to offer an opportunity to identify promising new research directions.
Scope and Topics
This special session solicits papers whose topics fall into (but are not limited to) the following categories:
Organized by Yu-Feng Li, Sheng-Jun Huang and Min-Ling Zhang
Traditional supervised learning methods typically require the training data to be fully labeled. Nowadays, data sizes increase at an unprecedented speed, fully labeling data becomes infeasible in many real situations, and consequently incompletely labeled data (or data with weak supervision) is ubiquitous. Various approaches have been developed over the years to learn with weak supervision, and learning from big data with weak supervision is showing its superiority over learning from fully labeled yet small data. However, many open problems remain, and in recent years many interesting challenges have been recognized: for example, safe semi-supervised learning that prevents unlabeled data from hurting performance is desired; data-adaptive active learning strategies have not been fully explored; and effective partial-label learning in the presence of class-imbalanced data, deriving high-quality labels from noisy crowds, and borrowing supervision from auxiliary sources all remain open issues.
Scope and Topics
The main goal of this session is to provide a forum for researchers in this field to share the latest advances in theories, algorithms, and applications for learning with incompletely labeled data. Authors are invited to submit their original work on learning with incompletely labeled data. The topics of interest include, but are not limited to:
Organized by Bin Liu
With the rapid development of advanced techniques in molecular biology, more and more sequence data is being generated, such as DNA, RNA, and protein sequences. A difficult and challenging task is discovering useful knowledge from these sequence data. To this end, more and more machine learning techniques have been successfully applied, and valuable information has been uncovered: for example, protein structure and function can be identified from protein sequences using machine learning approaches such as artificial neural networks (ANN) and support vector machines (SVM).
Scope and Topics
This special session focuses on exploring and applying advanced machine learning techniques in the field of bioinformatics. The topics include, but are not limited to:
Organized by Yasuaki Kuroe, Tohru Nitta and Akira Hirose
Complex-valued neural networks (CVNNs) are a rapidly developing and growing area that has attracted continued interest over the last decade. The CVNN special session has become a traditional event of the IJCNN conference. The special sessions organized since 2006 (WCCI-IJCNN 2006, Vancouver; Porto; WCCI-IJCNN 2008, Hong Kong; IJCNN 2009, Atlanta; WCCI-IJCNN 2010, Barcelona; IJCNN 2011, San Jose; WCCI-IJCNN 2012, Brisbane; IJCNN 2013, Dallas; IJCNN 2014, Beijing) attracted numerous submissions and had large audiences, featuring many interesting presentations and very productive discussions.
There are several new directions in CVNN development: from formal generalizations of commonly used algorithms to the complex-valued case, which is mathematically richer than the real-valued one, to the use of original complex-valued activation functions that can significantly increase neuron and network functionality. One of the new trends is the development of quaternion neurons and neural networks. There are also many interesting applications of CVNNs in pattern recognition and classification, nonlinear filtering, intelligent image processing, brain-computer interfaces, time series prediction, bioinformatics, robotics, etc.
One of the most important characteristics of CVNNs is the proper treatment of phase and the information contained in phase, e.g., the treatment of wave-related and rotation-related phenomena such as electromagnetism, light waves, quantum waves and oscillatory phenomena. Especially interesting among CVNNs are networks based on neurons with phase-dependent activation functions. This specific property makes it possible to increase single-neuron functionality and to design more flexible and more efficient networks. It is also very interesting to study how CVNNs can be used in the modeling of biological neural networks.
IJCNN 2016, which is an integral part of IEEE WCCI 2016, will be a very attractive forum where it will be possible to organize a systematic and comprehensive exchange of ideas in the area, to present recent research results and to discuss future trends. We hope that the proposed session will attract not only the potential speakers but also many new researchers interested in joining the CVNN community. We also expect that this session will be very beneficial for all computational intelligence researchers and other specialists in need of sophisticated neural network tools.
Scope and Topics
Papers that are, or might be, related to any aspect of CVNNs are invited. We welcome contributions on theoretical advances as well as contributions of an applied nature. We also welcome interdisciplinary contributions from other areas that border the proposed scope. Topics include, but are not limited to:
Organized by Zhi-Hui Zhan, Jing-Hui Zhong and Yong Wee Foo
Neural networks (NNs) have over 50 years of development behind them and have been widely used as efficient tools for many real-world applications. A typical NN consists of a layered structure of neurons and the weights that connect those neurons, and these structure and weight parameters can significantly impact its performance. Although many learning algorithms have been proposed to optimize NN parameters, they are still inadequate when dealing with complex real-world applications. The network modelling and optimization problems have become more difficult and challenging, not only because of new and emerging NN paradigms such as deep learning networks, which contain many layers of neurons and many weights, but also because of the difficulties in modelling NNs for challenging real-world applications in complex environments such as cloud and big data. Therefore, when applying NNs to real-world applications such as computer vision, natural language processing, speech recognition, classification, modeling and prediction in the cloud and big data era, how to model a proper NN to represent the problem and how to design approaches for optimizing the NN structure and parameters remain open and significant research topics. Being powerful global optimization tools, evolutionary computation (EC) algorithms have developed rapidly in the past two decades and have been widely applied to various optimization problems. The use of EC algorithms for optimizing the structure and parameters of NNs reported in the literature has shown promising performance, although many open issues remain. This Special Session aims to draw the attention of researchers in both the NN and EC communities to exchange their latest advances in the theory and technology of EC and NN, and their work on modeling NNs for real-world problems, designing approaches for optimizing NNs, and extending NNs to real-world applications.
Scope and Topics
Evolutionary computation (EC) algorithms comprise evolutionary algorithms (EAs) such as the genetic algorithm (GA), evolution strategies (ES), evolutionary programming (EP), estimation of distribution algorithms (EDA) and differential evolution (DE), as well as swarm intelligence (SI) algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), artificial bee colony (ABC) and brain storm optimization (BSO). These EC algorithms are efficient at optimizing both continuous optimization problems and discrete combinatorial optimization problems, and neural network (NN) optimization can be modelled as an optimization problem suitable for EC algorithms. This special session mainly focuses on research into NN models, optimization approaches, and applications. Work on the modelling of new NN paradigms, especially those based on real-world problems, is welcome; work on designing new optimization approaches, especially on using EC algorithms to optimize NN structures and/or weights, is welcome; and real-world applications of NNs and evolutionary NNs (ENNs) are also welcome. Authors are invited to submit their original and unpublished work on topics including, but not limited to:
Organized by Pedro A. Gutiérrez, María Pérez-Ortiz and Peter Tiňo
Ordinal regression (or ordinal classification) is a relatively new learning problem, where the objective is to learn a rule to predict labels on an ordinal scale, i.e., discrete labels endowed with a natural order. Consider, for example, the case of a teacher who rates students' performance using A, B, C, D and E, where A>B>C>D>E. Such order information can be helpful for constructing more robust and fair classifiers and evaluation metrics. Ranking, on the other hand, generally refers to those problems where the algorithm is given a set of ordered labels and the objective is to learn a rule to rank patterns using this discrete set of labels. Many real problems exhibit this structure, e.g. multicriteria decision making, medicine, risk analysis, university ranking, information retrieval and filtering.
Specific solutions have been recently proposed in the machine learning and pattern
recognition literature for both ordinal regression and ranking problems, resulting in a very
active research field. This special session aims to cover a wide range of approaches and
recent advances in ordinal regression and ranking. We hope that this session can provide a
common forum for researchers and practitioners to exchange their ideas and report their
latest findings in the area.
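To make the ordinal setting above concrete, here is a hedged sketch of one common reduction, a Frank and Hall style decomposition into "is the label greater than k?" binary problems; scikit-learn, the grade scale and the synthetic data are illustrative assumptions only, not the session's recommended method.

```python
# Minimal sketch (illustrative): ordinal regression via K-1 cumulative binary
# classifiers. scikit-learn, the grade scale E<D<C<B<A and the toy data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

GRADES = ["E", "D", "C", "B", "A"]            # ordered worst -> best
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                 # toy student features
score = X @ np.array([1.0, 0.5, -0.3]) + 0.3 * rng.normal(size=300)
y = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))  # ordinal labels 0..4

# One binary classifier per threshold: model_k estimates P(y > k | x).
models = [LogisticRegression().fit(X, (y > k).astype(int))
          for k in range(len(GRADES) - 1)]

def predict_ordinal(Xq):
    p_gt = np.column_stack([m.predict_proba(Xq)[:, 1] for m in models])  # P(y > k)
    # P(y = k) = P(y > k-1) - P(y > k), with P(y > -1) = 1 and P(y > K-1) = 0.
    ones, zeros = np.ones((len(Xq), 1)), np.zeros((len(Xq), 1))
    probs = np.hstack([ones, p_gt]) - np.hstack([p_gt, zeros])
    return np.argmax(probs, axis=1)

pred = predict_ordinal(X[:5])
print([GRADES[k] for k in pred])
```

The decomposition exploits the label order directly, which is exactly the information a nominal multi-class classifier would discard.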
Scope and Topics
In particular we encourage submissions addressing the following issues:
Organized by Stefano Squartini, Aurelio Uncini, Björn Schuller and Francesco Piazza
Computational Intelligence (CI) techniques are widely used to tackle complex modelling, prediction, and recognition tasks in different research fields. One of these is Digital Audio, which finds application in entertainment, security, forensics and health. Anyone can experience a large variety of services and products that include Digital Audio technologies, undoubtedly characterized by a progressive increase in complexity, interactivity and intelligence.
The typical methodology adopted in these engineering solutions consists of extracting and manipulating useful information from the audio stream to pilot the execution of automated services. Several technical areas in Digital Audio, involving different kinds of audible signals, share such an approach. In the "music" case study, music information retrieval is the major topic to address, with many diverse sub-topics therein; for "speech", we can immediately refer to speech/speaker recognition, but also to the many topics intimately related to the computational analysis of speech signals (affective computing and spoken language processing, just to name a few); in the case of "sound", acoustic fingerprinting/signatures, acoustic monitoring and sound detection/identification have lately seen an ever-increasing interest in the wider field. Moreover, cross-domain approaches that exploit the information contained in diverse kinds of environmental audio signals have also been investigated recently. In many application contexts, this appears in conjunction with data coming from other media, such as textual and visual data, for which specific fusion techniques are required.
In dealing with these problems, the adoption of data-driven learning systems is often a "must", and the recent success of deep neural architectures (in speech recognition, for instance) lends further evidence to this. Inherent challenges, however, come with technological issues, due to the presence of non-stationary operating conditions and hard real-time constraints, often made harder by the large amount of data to process. In some other application contexts, the challenge is facing a scarce amount of data available for training, and suitable architectures and algorithms need to be designed on purpose. Last but not least, a key issue in intelligent audio applications is the capability to learn representative features at different abstraction layers without the support of supervised actions. Again, the deep learning paradigm has recently allowed relevant achievements in this sense, with many open issues still to investigate.
It is indeed of great interest for the scientific community to understand how and to what extent novel Computational Intelligence-based techniques (with special attention to neural network ones) can be efficiently employed in Digital Audio, in the light of all the aforementioned aspects. The aim of the session is therefore to focus on the most recent advances in the Computational Intelligence field and on their applicability to Digital Audio problems. Driven by the success encountered at IJCNN 2014 in Beijing (China) and IJCNN 2015 in Killarney (Ireland), the proposers of this session are highly motivated to revive and exceed that experience and to build, in the long term, a solid reference within the Computational Intelligence community for the Digital Audio field.
Scope and Topics
Topics include, but are not limited to:
Organized by Qian Wang, Jun Shi, Shihui Ying, Manhua Liu and Yonghong Shi
Deep learning has demonstrated its capability for many vision problems, such as face detection and recognition, image classification, etc. It is expected that this technique can benefit the area of medical image analysis, as well as imaging-based translational medicine. Though a few pioneering works can be found in the literature, there are still many unresolved issues when applying deep learning to medical images.
The goal of this special session is to present work that focuses on the design and use of deep learning in medical image analysis as well as imaging-based translational medical studies. This special session aims to set the trends and identify the challenges of the use of deep learning methods in the field of medical imaging. Meanwhile, it is expected to strengthen the connections between software developers, specialist researchers and applied end-users from diverse fields.
Scope and Topics
Topics include, but are not limited to:
Organized by Huanhuan Chen, Giacomo Boracchi and Jian Cheng
Over the past few decades, research on and applications of sequential data have attracted growing attention from both the scientific and industrial communities. A number of processing techniques have been proposed for sequential data understanding and processing, e.g. dynamic time warping (DTW), the Fisher kernel and recurrent neural networks. The main aim of this special session is not only to explore new techniques in this area, providing original research aimed at a deeper understanding of the mechanisms of these algorithms, but also to encourage the exchange of ideas on sequential learning in different scenarios. We wish to communicate with people working in different research areas, as well as practitioners, professionals and academics in this area.
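For readers unfamiliar with the techniques listed above, the following sketch shows the classic dynamic-programming recurrence behind DTW for two one-dimensional sequences; the toy sequences and the absolute-difference local cost are illustrative assumptions.

```python
# Minimal sketch (illustrative): dynamic time warping between two 1-D sequences.
# The toy sequences and the absolute-difference local cost are assumptions.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])            # local cost of aligning a[i-1] with b[j-1]
            # best of the three admissible predecessors (insertion, deletion, match)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

s1 = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
s2 = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])          # same shape, shifted in time
print(dtw_distance(s1, s2))                             # small value despite the misalignment
```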
Scope and Topics
Topics include, but are not limited to:
Organized by Yiming Zhang, Shaohe Lv, Xin Niu and Xinwang Liu
This special session aims to promote new advances and research directions to address the clustering problem in large-scale practical applications. Unprecedented technological advances are leading to increasingly large-scale data sets in all areas of science, engineering and business, including genomics and proteomics, biomedical imaging, signal processing, astrophysics, finance, and web and market basket analysis, among many others. The number of data points is often of the order of millions or billions. Classical clustering algorithms become inadequate, questionable, or inefficient at best, and this calls for new clustering algorithms.
Scope and Topics
Topics of interest include theoretical foundations, algorithms and implementation, as well as applications and empirical studies, for example:
Organized by Jinhui Tang and Zechao Li
Conventional multimedia understanding is usually built on top of handcrafted features, which are often too restrictive to capture complex multimedia content. Recent progress in deep learning opens an exciting new era, placing multimedia understanding on a more rigorous foundation with automatically learned representations that model multimodal data and cross-media interactions. Existing studies have shown promising results that have greatly advanced state-of-the-art performance in a series of multimedia research areas, from multimedia content analysis, to modeling the interactions between multimodal data, to multimedia content recommendation systems, to name a few.
Scope and Topics
This special session aims to provide a forum for the presentation of recent advances in deep learning research that directly concern the multimedia community. For multimedia research, it is especially important to develop deep learning methods that capture the dependencies between different genres of data, building joint deep representations for diverse modalities. The list of topics includes, but is not restricted to, the following:
Organized by Massimo Panella and Simone Scardapane
In the era of big data and pervasive computing, it is common that datasets are distributed over multiple and
geographically distinct sources of information (e.g. distributed databases). In this respect, a major challenge is designing
adaptive training algorithms in a distributed fashion, with only partial or no reliance on a centralized authority. Indeed,
distributed learning is an important step to handle inference within several research areas, including sensor networks,
parallel and commodity computing, distributed optimization, and many others. Additionally, it generalizes previous
research on training neural and fuzzy neural models over clusters of processors and, as such, it is crucial in designing
training algorithms for efficiently processing large amounts of data over networks.
Based on the idea that all the aforementioned research fields share many fundamental questions and mechanisms, this
special session is intended to bring forth advances on distributed training for neural networks. We are interested in
papers proposing novel algorithms and protocols for distributed training under multiple constraints, analyses of their
theoretical aspects, and applications for multiple source data clustering, regression and classification.
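As a hedged illustration of training with only partial reliance on a centralized authority, the sketch below alternates local gradient steps with neighbour-to-neighbour parameter averaging on a ring (a simple decentralized gradient descent); the linear model, ring topology, step size and synthetic data are assumptions, not a protocol endorsed by the session.

```python
# Minimal sketch (illustrative, not a prescribed protocol): decentralized training
# of a shared linear model. Each node holds private data, takes a local gradient
# step, then averages parameters with its ring neighbours. Topology, step size
# and synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_NODES, DIM, STEP, ROUNDS = 4, 3, 0.1, 200
w_true = np.array([1.0, -2.0, 0.5])

# Private local datasets (e.g., geographically distinct sources).
data = []
for _ in range(N_NODES):
    X = rng.normal(size=(50, DIM))
    y = X @ w_true + 0.05 * rng.normal(size=50)
    data.append((X, y))

w = np.zeros((N_NODES, DIM))                       # one parameter vector per node
for _ in range(ROUNDS):
    # 1) local gradient step on each node's own data (least-squares loss)
    for k, (X, y) in enumerate(data):
        grad = X.T @ (X @ w[k] - y) / len(y)
        w[k] -= STEP * grad
    # 2) consensus step: average with the left and right neighbours on the ring
    w = (np.roll(w, 1, axis=0) + w + np.roll(w, -1, axis=0)) / 3.0

print("node estimates close to w_true:", np.allclose(w, w_true, atol=0.05))
```

Only parameter vectors travel between neighbours; the raw local datasets never leave their nodes, which is the point of the distributed setting described above.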
Scope and Topics
The topics of interest to be covered by this Special Session include, but are not limited to:
Organized by Giacomo Boracchi, Robi Polikar, Manuel Roveri and Gregory Ditzler
The learning abilities of computational models have been well researched, with promising progress, but the vast majority of these efforts still rely on two fundamental assumptions:
Scope and Topics
Researchers working in any of the related areas of learning in dynamic/nonstationary
environments, concept drift or domain adaptation are encouraged to submit their contributions
to this special session.
The scope of the proposed session includes, but is not limited to:
Organized by Guang‐Bin Huang, Jonathan Wu and Donald C. Wunsch II
Over the past few decades, conventional computational intelligence techniques have faced bottlenecks in learning (e.g., intensive human intervention and time-consuming training). With the ever-increasing demand for computational power, particularly in areas such as big data computing, brain science, cognition and reasoning, emergent computational intelligence techniques such as extreme learning machines (ELM) offer significant benefits, including fast learning speed, ease of implementation and minimal human intervention.
Extreme Learning Machines (ELM) aim to break the barriers between conventional artificial learning techniques and biological learning mechanisms. ELM represents a suite of machine learning techniques for (single- and multi-) hidden-layer feedforward neural networks in which the hidden neurons need not be tuned. From the point of view of ELM theory, the entire multilayer network is structured and ordered, but it may be seemingly "messy" and "unstructured" in a particular layer or neuron slice. "Hard wiring" can be randomly built locally with full or partial connections. The coexistence of globally structured architectures and locally random hidden neurons happens to provide fundamental learning capabilities for compression, feature learning, clustering, regression and classification. ELM theories also give theoretical support to local receptive fields in visual systems. ELM learning theories show that hidden neurons (including biological neurons whose mathematical models may be unknown), with almost any nonlinear piecewise activation function, can be generated randomly, independent of training data and application environments, which has recently been supported by concrete biological evidence. ELM theories and algorithms argue that "random hidden neurons" capture the essence of some brain learning mechanisms, as well as the intuitive sense that the efficiency of brain learning need not rely on the computing power of individual neurons. This may hint at possible reasons why the brain is more intelligent and effective than computers. ELM offers significant advantages such as fast learning speed, ease of implementation, and minimal human intervention, and it has good potential as a viable alternative technique for large-scale computing and artificial intelligence.
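A minimal sketch of the idea described above, assuming a basic single hidden layer with a sigmoid activation: the hidden-layer weights are drawn at random and never tuned, and only the output weights are obtained by regularized least squares. The hidden-layer size, ridge constant and toy data are illustrative assumptions.

```python
# Minimal sketch (illustrative): a basic single-hidden-layer ELM. Hidden weights
# are random and untuned; only the output weights beta are fitted by (ridge)
# least squares. Hidden size, ridge constant and the toy data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2            # toy regression target

N_HIDDEN, RIDGE = 100, 1e-3
W = rng.normal(size=(2, N_HIDDEN))                 # random input weights (never tuned)
b = rng.normal(size=N_HIDDEN)                      # random biases (never tuned)

def hidden(Xq):
    return 1.0 / (1.0 + np.exp(-(Xq @ W + b)))     # sigmoid hidden-layer outputs H

H = hidden(X)
# beta = (H^T H + ridge * I)^{-1} H^T y  -- the only trained parameters
beta = np.linalg.solve(H.T @ H + RIDGE * np.eye(N_HIDDEN), H.T @ y)

X_test = rng.uniform(-1, 1, (5, 2))
print(hidden(X_test) @ beta)                       # predictions for unseen points
print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```

Because only beta is trained, fitting reduces to a single linear solve, which is the source of the fast learning speed mentioned above.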
The need for efficient and fast computational techniques poses many research challenges. This special
session seeks to promote novel research investigations in ELM and related areas.
Scope and Topics
All original papers related to ELM techniques are welcome. Topics of interest include but are not limited to:
Theories
Organized by Chaomin Luo
Biologically-inspired intelligence techniques, an important branch of computational intelligence, play a crucial role in robotics. The autonomous robot and vehicle industry has had an immense impact on our economy and society, and this trend will continue with biologically-inspired neural network techniques. Biologically-inspired intelligence, such as biologically-inspired neural networks (BNN), is about learning from nature, and it can be applied to real-world robot and vehicle systems. Recently, the research and development of bio-inspired systems for robotic applications has been expanding worldwide. Biologically-inspired algorithms contain emerging sub-topics such as bio-inspired neural network algorithms, brain-inspired neural networks, swarm intelligence with BNN, ant colony optimization (ACO) with BNN, bee colony optimization (BCO), particle swarm optimization with BNN, immune systems with BNN, and biologically-inspired evolutionary optimization and algorithms. Additionally, the area encompasses computational aspects of bio-inspired systems such as machine vision, pattern recognition for robot and vehicle systems, motion control, motion planning, movement control, sensor-motor coordination, and learning in biological systems for robot and vehicle systems.
This special session seeks to highlight and present the growing interest in emerging research, development and applications in the dynamic and exciting areas of biologically-inspired algorithms for robot and vehicle systems (autonomous robots, unmanned underwater vehicles, and unmanned aerial vehicles).
Scope and Topics
Original research papers are solicited in related areas of biologically-inspired algorithms for robotics. Submissions to the Special Session should focus on theoretical results or innovative applications of biologically-inspired computational intelligence algorithms (such as BNN) for robot and vehicle systems. Specific topics for the special session include but are not limited to:
Organized by Daoyi Dong, Dongbin Zhao and Qinmin Yang
Reinforcement learning and approximate dynamic programming can be used to address learning and optimization problems in many areas of engineering and science, including artificial intelligence, control engineering, operations research, psychology, and economics. They have provided critical tools for solving engineering and science problems in modern complex systems. However, there still exist challenges in applying reinforcement learning and approximate dynamic programming to academic and industrial problems, such as the curse of dimensionality and optimization in dynamic environments. At the same time, the development of new technologies such as quantum technology and deep learning provides a remarkable opportunity to revisit these challenges. This special session will focus on relevant topics in reinforcement learning and approximate dynamic programming and provide a forum for the exchange of ideas in this emerging research area.
Scope and Topics
The aim of this special session is to provide an account of the state of the art in the fast-moving and cross-disciplinary field of reinforcement learning and approximate dynamic programming. It is expected to bring together researchers in relevant areas to discuss the latest progress and to propose new problems for future research. All original papers related to reinforcement learning (RL) and approximate dynamic programming (ADP) are welcome. Topics of interest include but are not limited to:
Organized by Khan M. Iftekharuddin
Constructive understanding of computational principles of visual information processing, perception and cognition is one of the most fundamental challenges of contemporary science. Deeper insight into biological vision helps to advance intelligent systems research to achieve robust performance similar to biological systems. Biological inspiration indicates that sensory processing, perception, and action are intimately linked at various levels in animal vision. Implementing such integrated principles in artificial systems may help us achieve better, faster and more efficient intelligent systems. This session provides an integrated platform to present original ideas, theory, design, and applications of computational vision.
Scope and Topics
Topics of interest include, but are not limited to the following:
Organized by Tianqing Zhu, Gang Li and Ping Xiong
Over the past two decades, digital information collected by corporations, organizations and governments has resulted in a huge number of datasets, and the speed of such data collection has increased dramatically over the last few years. A data collector, also known as a curator, is in charge of releasing and publishing data for data mining and other applications. However, most of the collected datasets are personally related and contain private or sensitive information. Even though curators can apply several simple anonymization techniques, there is still a high probability that the sensitive information of individuals will be disclosed. Privacy preservation has therefore become an urgent issue that needs to be addressed.
Research communities have proposed various methods to preserve privacy and have designed a number of metrics to evaluate the privacy level of these methods. Interest in this area is very high, and the notion spans a range of research areas, from the privacy community to the data science communities, including machine learning, data mining, statistics and learning theory. Much work has been conducted in a number of application domains, including social networks, online education, recommender systems and tourism. A significant number of new technologies and applications have appeared in the privacy-preserving research area. We believe that it is a good time to cover these topics in a special session, which will include recent advances and applications of privacy-preserving research in data mining and diverse applications.
Scope and Topics
All submissions will be rigorously peer reviewed to guarantee quality. This special session will focus on original articles on relevant topics, which include but are not limited to:
Organized by Jia Wu, Shirui Pan, Peng Zhang, Xingquan Zhu, Chengqi Zhang and Philip S. Yu
Traditional machine learning methods have been commonly used in many applications, such as text classification, image recognition, video tracking, etc. For learning purposes, the data are often required to be represented as vectors. However, many objects in real-world applications, such as chemical compounds in biopharmacy, brain regions in brain networks and users in social networks, contain both rich feature vectors and structure information, and a simple feature-vector representation inherently loses the structure information of these objects. In reality, objects may have complicated characteristics, depending on how they are assessed and characterized. Meanwhile, the data may come from heterogeneous domains, such as traditional tabular data, sequential patterns, social networks, time series and semi-structured data. In order to preserve the information, accommodate the complicated characteristics of the objects, and adapt to advanced applications, novel machine learning methods are needed to learn and discover meaningful knowledge.
Scope and Topics
This special session solicits contributions on advanced machine learning methods and applications in complicated data environments. The topics of interest include, but are not limited to:
Organized by Zhen Ni, Haibo He, Yan Sun, and Dongbin Zhao
In recent years, new computational intelligence and machine learning methodologies and frameworks have been developed as useful techniques to address power grid security issues. Blackouts, cascading failures, cyber/physical attacks and other stability issues have attracted researchers from many disciplines to look into this interdisciplinary topic. Modern, complex power grid operation and safety issues also need the special attention and involvement of people with expertise in the aforementioned fields.
This special session will provide a unique platform for researchers from different communities, including computational intelligence, machine learning, power and energy, cyber security, communications and neuroscience, among others, to share their research experience towards a secure and smart modern power grid. The special session will also enhance the discussion among different communities to explore more challenging cross-disciplinary topics in this direction.
Scope and Topics
This special session will provide a forum to discuss recent development in power grid security based on all kinds of computational intelligence and machine learning techniques. We are particularly interested in the following topics:
Organized by Guandong Xu, Gang Li and Wu He
Data mining provides educational institutions the capability to explore, visualize and analyze large amounts of data in order to reveal valuable patterns in students' learning behaviors without having to resort to traditional survey methods. Turning raw data into useful information and knowledge also enables educational institutions to improve teaching and learning practices, and to facilitate the decision-making process in educational settings. Thus, it is becoming important for researchers to exploit the abundant data generated by various educational systems for enhancing teaching, learning and decision making.
In addition, the development and training of teachers in regional areas can also be improved by adopting smart techniques in the big data age. How to get these tasks done smartly and effectively is another important issue that could potentially be addressed in the big data age.
Scope and Topics
All submissions will be rigorously peer reviewed to guarantee quality. This special session will focus on original articles on relevant topics, which include but are not limited to:
Organized by Nathan Scott and Nikola Kasabov
Spiking neural networks (SNN) are a rapidly emerging means of neural information processing, drawing inspiration from biological processes. There is presently considerable interest in this topic, especially with the recent announcement of large-scale projects such as the “BRAIN Initiative” (US) and the “Human Brain Project” (EU). Due to their inspiration from human brain processes, SNN have the potential to advance technologies and techniques in fields as diverse as medicine, finance and computing, and indeed any field that involves complex spatio-temporal data. SNN can operate on noisy data, in changing environments, at low power and with high effectiveness. We believe that this area is quickly establishing itself as an effective alternative to traditional machine learning technologies, and that interest in this area of research is growing rapidly. In this special session we intend to provide a platform for the discussion of contemporary areas of SNN research, including theory, applications, and emerging technologies such as neuromorphic hardware.
Scope and Topics
Topics of interest include, but are not limited to the following:
Organized by Mahardhika Pratama, Meng Joo Er, Edwin Lughofer, Wenny Rahayu, Chee-Peng Lim and Ning Wang
Learning from large data streams is a research area of growing interest because large volumes of data are continuously generated from sensors, the Internet, etc., at an increasingly high rate. The major difficulty for an online learner in learning from large data streams results from uncertainties arising from three causes, namely real-time situations, data distribution and data representation:
Scope and Topics
The main topics of this special session include, but are not limited to, the following:
Organized by Ali Heydari and Kyriakos Vamvoudakis
Cyber-physical systems (CPS), where dynamical systems are subject to computation and/or communication considerations, are emerging in different fields, from aerospace and manufacturing to healthcare and economics. Control of CPS differs from traditional control due to the required incorporation of the cyber part of the systems. This cyber part gives rise to concerns about stability and performance of the system due to issues including communication delays, information losses, limited communication bandwidths, quantization errors, limited computational resources, and vulnerability to cyberattacks. These emerging challenges call for new methods and tools for effective control of CPS. Approximate/adaptive dynamic programming (ADP), as a powerful scheme for approximating solutions to optimal control problems, has shown great potential for solving this class of problems in recent years.
Scope and Topics
This special session aims to provide a forum for presenting the latest developments in the control of CPS using ADP. Topics of interest include, but are not limited to:
Organized by Andrew Cassidy, Alexander Andreopoulos, Michael DeBole and Arnon Amir
Recent algorithmic advances in deep neural networks have made enormous strides in accuracy on a wide range of applications and domains (images, video, audio, speech, natural language, etc.). At the same time, emerging work on low-power computing platforms, inspired by the structure of the brain, has demonstrated orders-of-magnitude improvements in computational efficiency and has scaled up to the point where it can address real-world machine learning challenges. Combining these two disciplines has the potential to enable truly revolutionary solutions and applications, ranging from small-scale embedded mobile systems up to very large-scale scientific and data center installations.
The objective of this special session is to report on work at this intersection,
applying advanced neural network algorithms on efficient computational
architectures to solve complex tasks. Specifically, we seek submissions
reporting concrete measured results in terms of energy/power/throughput
performance with at or near state-of-the-art accuracy on all manner of
datasets. We also welcome research exploring the tradeoffs of accuracy and
energy at the algorithmic and architectural levels as well as optimization of
algorithms for energy efficiency on such platforms. In all cases, papers
that report concrete measured power/energy results will be given higher
consideration than simulated results.
Scope and Topics
This session seeks contributions on the latest developments and emerging research for all aspects of energy-efficient neural networks. Specific topics for the session include, but are not limited to:
Organized by Patricia Melin
This Special Session is being organized as one of the main activities of the Task Force on Hybrid Intelligent Systems of the NNTC and will consist of papers that integrate different Soft Computing (SC) methodologies for the development of hybrid neural intelligent systems for modeling, simulation and control of non-linear dynamical systems. The goal of the special session is to promote research on hybrid neural systems all over the world, and researchers working on this area are welcome to submit their papers.
SC methodologies at the moment include (at least) Neural Networks, Fuzzy Logic, Genetic Algorithms and Chaos Theory. Each of these methodologies has advantages and disadvantages, and many problems have been solved using one of them. However, many real-world complex industrial problems require the integration of several of these methodologies to achieve the efficiency and accuracy needed in practice.
Scope and Topics
This session will include papers dealing with methods for integrating the different SC methodologies and neural networks in solving real-world problems. The Special Session will consider applications on the following areas:
Organized by Seiichi Ozawa, Cesare Alippi, Sung-Bae Cho and Masahide Nakamura
Thanks to recent advancements in ICT and sensor/actuator technologies, a new generation of devices has become available to constitute smart grids/homes/buildings/environments and cyber-physical systems. Information coming from the field and feedback actions can be utilized to enrich and improve our lives. In the smart home framework, for example, various sensors monitor human behaviors and health conditions, the electrical consumption of appliances, and power generation from solar panels; all these systems/applications are then controlled efficiently based on the sensed information and inferred human intention. “Smart” technologies have recently received great interest, with smartification not limited to personal houses only. In fact, technologies, intelligent processing and actions can be extended to all kinds of private/public services and to our living environments at large. Smartification would bring us a new form of society, a “Smart Society”, which provides not only comfortable but also safe environments in both the physical and the cyber worlds. Therefore, ensuring safety and security is also a major issue that a Smart Society has to address. Smart technologies will also enable the design of applications protecting us from risks, crime and hazards, such as natural disasters, accidents, terrorism, cyber-attacks, and privacy leaks via phishing.
Scope and Topics
The purpose of this special session is to share new research about computational intelligence and its fundamental role in building, operating, and driving a safer and smarter society. As such, we welcome high-quality and unpublished papers focusing on how computational intelligence techniques can shape smart systems, methodologies and their applications. The topics of interest of this special session include, but are not limited to:
Organized by Seiichi Ozawa, Nistor Grozavu and Nicoleta Rogovschi
Thanks to recent advancements in ICT and sensor technologies, data are continuously generated by various kinds of information sources (e.g., SNS, e-mail, POS systems, surveillance cameras, etc.), and such a sequence of data is often called a “data stream”. To learn from a data stream in real time, learning must be carried out incrementally, in one pass, with the incoming data. Therefore, fast incremental machine learning algorithms are solicited to learn useful features, classifiers, and predictors. However, learning data streams in one pass is a challenging problem when we have to deal with high-dimensional large-scale data, unbalanced data, or data with serious noise/outliers. This Special Session aims at bringing together and sharing new ideas on new types of incremental learning algorithms under stationary and non-stationary environments.
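As a concrete, hedged illustration of the one-pass setting (not a prescription for submissions), the sketch below updates an online logistic-regression learner once per incoming sample and never revisits past data; the feature dimension, learning rate, and simulated stream are assumptions made purely for illustration.

import numpy as np

class OnlineLogisticRegression:
    """One-pass online learner: each sample is seen once and then discarded."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def partial_fit(self, x, y):
        # Stochastic gradient step on the log-loss for a single (x, y) pair.
        error = self.predict_proba(x) - y     # y is 0 or 1
        self.w -= self.lr * error * x
        self.b -= self.lr * error

# Simulated data stream: the model is updated sample by sample.
rng = np.random.default_rng(0)
model = OnlineLogisticRegression(n_features=5)
for _ in range(10000):
    x = rng.normal(size=5)
    y = int(x.sum() > 0)                      # hidden concept generating labels
    model.partial_fit(x, y)
print("learned weights:", np.round(model.w, 2))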
Scope and Topics
A wide range of incremental online learning methods and applications is covered in this special session, including but not limited to the following:
Organized by Alessandro Sperduti, Jose C. Principe and Plamen Angelov
Deep learning models and techniques are increasingly the computational tool of choice when facing difficult applied problems, such as speech and image understanding. The reason for this huge interest in deep learning is that its adoption leads to human-level (and, in some cases, super-human) performance. These successes, however, have mainly been obtained on an empirical basis, often thanks to the computational power provided by parallel computing facilities such as GPUs or CPU clusters.
Although some recent works have addressed deep learning from a theoretical perspective, there is still limited understanding of why deep architectures work so well and of how to design computationally efficient and effective training algorithms.
Scope and Topics
This special session aims to gather together leading scientists in deep learning
and related areas within computational intelligence, neuroscience, machine
learning, artificial intelligence, mathematics, and statistics, interested in all aspects
of deep architectures and deep learning, with a particular emphasis on
understanding fundamental principles.
Topics of interest to the special session include, but are not limited to:
Organized by Friedhelm Schwenker and Stefan Scherer
The proposed special session focuses on neural network-based transfer learning and
knowledge adaptation for pattern recognition problems in human-computer
interaction scenarios. Of particular interest for the special session is the classification
of human behavior patterns and affect.
Scope and Topics
The special session’s topics include but are not limited to:
Organized by Abdulrahman Altahhan, Vasile Palade, Junyu Dong, Xinghui Dong, Hui Yu and Mohamed Cheriet
Deep Learning has been under the focus of the neural network research and industrial communities due to its proven ability to scale well to difficult problems and its performance breakthroughs over other architectural and learning techniques on important benchmark problems, mainly in the form of improved data representations for supervised learning tasks. Reinforcement learning (RL) is considered the model of choice for problems that involve learning from interaction, where the target is to optimize a long-term control strategy or to learn an optimal policy. Typically these applications involve processing a stream of data coming from different sources, ranging from massive central databases to pervasive smart sensors (such as those commonly used by a diabetic person or a smart-home thermostat).
RL does not lend itself naturally to deep learning, and despite promising attempts there is currently no unified approach to combining deep learning with reinforcement learning. Important questions remain open: for example, how can the state-action learning process be made deep? How can the architecture of an RL system incorporate deep learning without compromising the interactivity of the system? Although there have recently been important advances in dealing with these issues, they remain scattered, with no overarching framework defined in a natural way.
This special session will provide a unique platform for researchers from the Deep Learning and Reinforcement Learning communities to share their research experience towards a unified Deep Reinforcement Learning (DRL) framework, in order to allow this important interdisciplinary branch to take off on solid ground. It will concentrate on the potential benefits of the different approaches to combining RL and DL. It aims at bringing more focus to the potential of infusing the reinforcement learning framework with deep learning capabilities that could allow it to deal more efficiently with current learning applications, including but not restricted to online streamed data processing that involves actions.
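As one hedged illustration of how the two paradigms are commonly coupled (not the unified framework this session seeks), the sketch below performs a DQN-style temporal-difference update of a small Q-network in PyTorch; the transition format, network size, and hyper-parameters are illustrative assumptions.

import random
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99

# A small Q-network: deep learning supplies the state-action value estimator.
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(batch):
    """One temporal-difference step on a mini-batch of transitions."""
    states, actions, rewards, next_states, dones = zip(*batch)
    states = torch.tensor(states, dtype=torch.float32)
    next_states = torch.tensor(next_states, dtype=torch.float32)
    actions = torch.tensor(actions, dtype=torch.int64).unsqueeze(1)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    dones = torch.tensor(dones, dtype=torch.float32)

    q_sa = q_net(states).gather(1, actions).squeeze(1)        # Q(s, a)
    with torch.no_grad():                                     # bootstrapped target
        target = rewards + gamma * (1 - dones) * q_net(next_states).max(1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy transitions stand in for interaction with an environment.
batch = [([0.1] * state_dim, random.randrange(n_actions), 1.0,
          [0.2] * state_dim, False) for _ in range(32)]
print("TD loss:", td_update(batch))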
Scope and Topics
Topics of interest include, but are not limited to the following:
Organized by Saibal Mukhopadhyay and Kaushik Roy
Neuro-inspired and non-Boolean algorithms are emerging as strong candidates for future computing platforms. The application domain ranges from smart sensors to accelerators in mobile platforms to high-performance systems. Digital CMOS-based hardware realization of these platforms demonstrates limited energy efficiency. Consequently, there is a strong need and interest in exploring beyond-CMOS technologies for hardware platforms for neuro-inspired algorithms. The potential technologies include alternative field-effect transistors like tunneling transistors; emerging memory devices like resistive RAMs and memristors; and non-charge-based devices like spintronics. The focus of this special session is to highlight recent advances in the application of these emerging devices to various types of neuro-inspired platforms, including associative memory, different neural networks like Cellular Neural Networks and Spiking Neural Networks, and oscillatory computing. The presented papers will highlight algorithm, architecture, and technology co-design approaches, and comparative analysis with digital CMOS-based implementations. The proposed special session will provide a forum for fellow researchers in this exciting cross-disciplinary field.
Scope and Topics
The topics of the special session include, but are not limited to:
Organized by Zhanshan Wang, Guotao Hui and Tieshan Li
An artificial neural network is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. With their remarkable ability to derive meaning from complicated or imprecise data, neural networks can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. Traditional neural network schemes have been applied in several practical settings, such as continuous stirred-tank reactors, manipulator arms, image recognition, and image encryption. Nonetheless, numerous other real-time applications, such as secure communication, cordless electric cars, machine tools, memory storage, and the power grid, involve complicated historical data, complex mechanism characteristics, and so on. This special session therefore aims at disseminating the latest interdisciplinary research on the theory and application of neural networks.
The purpose of this special session is to provide an opportunity for scientists, engineers, and practitioners to propose their latest theoretical and technological achievements in the analysis of dynamics and design of neural networks and application in industry. Besides, this special session aims at bringing together researchers in neural networks (NNs) and related areas to brainstorm about new solutions and directions.
Scope and Topics
Authors are invited to submit their original work on neural networks and related fields. The potential topics include, but are not limited to:
Organized by Mahboobeh Parsapoor and John Brooke
Space weather can be defined as changes in the solar system that cause solar wind and geomagnetic storms, which influence the magnetosphere, ionosphere and thermosphere and in turn can damage both ground-based and space-based communication systems and endanger human life.
Scope and Topics
The main goal of this Special Session is to examine Computational Intelligence paradigms for predicting solar activity and geomagnetic storms and to develop alert systems based on them. Other aims of this session are to:
Organized by Bao-Liang Lu, Thierry Pun, Milos R. Popovic and Hiroshi Yokoi
Over the last decade, there has been a rising tendency in affective Brain-Computer Interaction (aBCI) research to enhance Brain-Computer Interaction systems with the ability to detect, process, and respond to users' emotional states. Besides logical intelligence, the introduction of emotional intelligence into BCI to create aBCI has received increasing interest from interdisciplinary research fields including psychology, neuroscience, computer science, and computational intelligence. In this new domain of affective sciences, aBCI aims to narrow the communication gap between the highly emotional human and the BCI systems by developing computational systems that recognize and respond to human emotions. Various applications of aBCI systems have been proposed such as workload monitoring, driving fatigue detection, implicit affective tagging, and game adaptation.
With the fast development of embedded systems and wearable technology, it is now conceivable to port aBCI systems from the laboratory to real-world environments. Various advanced dry electrodes and embedded systems, including some commercial products, have been developed to handle the wearability, portability, and practical use of these systems in real-world applications. aBCI includes affective sensing, emotion detection, and feedback from brain signals and other physiological activity, which extends the concept of conventional BCI.
Although significant advances have been made and many applications have been proposed, the problem of detecting, modeling and regulating emotions in aBCI systems remains complex and largely unexplored. There exist many critical challenges in aBCI systems. How can we deal with artifacts and noise in uncontrolled real-world environments? How can machines respond to the recognized affective states and bring users to a desired affective state? How can we elicit and measure emotions in social settings? How can we develop adaptive aBCI systems that address individual differences and changing environments? How can we introduce contextual information to aBCI? What are the neural patterns or signatures for different emotional states, and how stable are computational models over time? The goal of this special session is to connect researchers from related fields to discuss the state-of-the-art progress and enhance inter-disciplinary collaborations in aBCI. We are soliciting original contributions addressing the above research questions.
Scope and Topics
Topics of interest include but are not limited to:
Organized by Badong Chen and Lei Sun
This special session focuses mainly on various machine learning methods robust to large
outliers (or impulsive noises).
Scope and Topics
Topics of interest include:
Organized by Alfredo Vellido, José D. Martín and Paulo J.G. Lisboa
Personal health is widely seen as the future of healthcare, with a focus on the 4Ps: Prediction, Prevention, Personalisation and Participatory healthcare. This is distinct from pharmacogenomics or stratified therapies, but focuses instead on tracking our health – rather than illness – using wearable sensors and other home/wifi sensors to measure our physical state over time. This is a data-rich context in which Computational Intelligence (CI) can provide a wide range of tools for health-related knowledge extraction. Trends can then be identified which are used for a range of purposes from motivation for regular exercising to prevention and early screening for unexpected deterioration. Where individuals suffer from chronic conditions, this approach can help to target the right intervention at the right time. This topic has links with health vaults, with avatars, as well as supporting care for the specific groups such as the elderly. This is a topic in which data integration is of key importance and different types of data can be brought in to provide context, ranging from demographics to social media.
Scope and Topics
This workshop will focus on methodologies from the fields of Machine Learning (ML) and, in the broadest sense, CI, as well as on prototype applications targeting this emergent area of health care. Suitable topics would include, but are not limited to:
Organized by Yiu-Ming Cheung, Yang Liu, Yuping Wang and Ping Guo
Recent advances in storage, hardware, information technology, communication, and
networking have resulted in a large amount of multimedia data. This has powered the
demand to extract useful and actionable insights from such data in an automatic,
reliable and scalable way. Machine learning, which aims to construct algorithms that
can learn from and make predictions on data intelligently, has attracted increasing
attention in recent years and has been successfully applied to many multimedia
computing tasks, such as image processing, face recognition, video surveillance,
document summarization, etc. Since a lot of machine learning algorithms formulate
the learning tasks as linear, quadratic or semi-definite mathematical programming
problems, optimization becomes a crucial tool and plays a key role in machine
learning and multimedia data analysis tasks. On the other hand, machine learning and
the applications in multimedia computing are not simply the consumers of
optimization technology but a rapidly evolving interdisciplinary research field that is
itself promoting new optimization ideas, models, and solutions.
Scope and Topics
This special session "Advanced Methods in Optimization and Machine Learning for Multimedia Computing" aims to provide a platform for academics and industry-related researchers in the areas of applied mathematics, machine learning, artificial intelligence, pattern recognition, data mining, multimedia processing, and big data to exchange ideas and explore traditional and new areas in optimization and machine learning as well as their applications in multimedia computing. The topics of the special session include, but are not limited to:
Organized by Dongbin Zhao, Yuanheng Zhu and Haibo He
In the past few decades, adaptive dynamic programming (ADP) and reinforcement learning (RL) have been extensively studied by both the computational intelligence and control communities. Great achievements have been made in solving optimal control problems and in providing intelligent solutions for various problems. Multi-agent ADP/RL is a very active research topic for solving cooperative control problems among agents with multiple inputs. Recently, the input to the agent has been extended to higher dimensions, e.g., images. The Google DeepMind group has reported promising results for video games based on deep reinforcement learning, with videos or images as input, which is a great success and inspiration for the ADP/RL research domain. In sum, ADP/RL for high-dimensional systems deserves increasing research attention.
Scope and Topics
The aim of this special session is to call for the most advanced research and state-of-the-art works in the field of ADP/RL in high dimensional systems. It is expected to provide a platform for international researchers to exchange ideas and to present their latest research in the relevant topics. All the original papers related to ADP and RL are welcome. Specific topics of interest include but are not limited to:
Organized by Zutong Wang, Baoding Liu, Dan Ralescu and Jiansheng Guo
In order to deal with indeterminacy mathematically, two axiomatic systems have been founded, namely, probability theory and uncertainty theory. When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. In order to rationally deal with personal belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of mathematics.
Scope and Topics
The goal of this special session is to provide an excellent forum for the discussion of the latest theoretical advances and practical applications in this exciting research field, to help foster the understanding, development, and practice of uncertainty theory for solving problems in economy, engineering, management and the social sciences. We invite the submission of high-quality, original and unpublished papers in this area. The topics of interest include, but are not limited to:
Organized by Yafei Song, Zhun-Ga Liu and Xiaodan Wang
Since its inception, belief function theory, also known as Dempster-Shafer theory or evidence theory, has received growing attention in many fields of application such as finance, technology, and biomedicine, despite its limitations in combining highly conflicting belief functions. To remove roadblocks in the development of belief function theory, many improvements have subsequently been made, e.g., the transferable belief model (TBM) and Dezert-Smarandache theory (DSmT). At present, more and more researchers are dedicated to studying belief function theory from different viewpoints for further exploration and better exploitation.
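For readers new to the area, the following minimal sketch implements Dempster's classical rule of combination for two mass functions over a small frame of discernment; the example masses are invented for illustration.

from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule; raises if the evidence is totally conflicting."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Frame of discernment {a, b, c} with two (illustrative) bodies of evidence.
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("abc"): 0.5}
print(dempster_combine(m1, m2))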
Scope and Topics
This special session is intended to present the latest advances in belief function theory, the relationship between belief function theory and other theories such as probability theory, possibility theory, rough set theory, and fuzzy set theory, and the fusion of imperfect information in the unified framework of random set theory, together with their applications in artificial intelligence, so as to further the development of belief function theory for solving problems in engineering. We invite original submissions of high quality in this area. The topics of interest include, but are not limited to:
Organized by Stefano Aguzzoli, Pietro Codara and Diego Valota
Many-valued logics have constituted for several decades key conceptual tools for the formal description and management of fuzzy, vague and uncertain information. In the last few years, the study of these logical systems has seen a bloom of new research related to the most diverse areas of mathematics and applied sciences. Relevant recent developments in this field are connected to the natural semantics of non-classical events. A non-classical event is described by a formula in the language of a given many-valued logic. A satisfying semantics for such events must account for their different aspects, in particular the "ontic" aspect, related to their vague nature, and the "epistemic" aspect, related to our ignorance, or approximate knowledge, about them. The combination in a unique conceptual framework of the logic and the probability of a class of non-classical events, usually reached through the algebraic semantics and their topological or combinatorial dualities, provides both theoreticians and application-oriented scholars with powerful tools to deal with this kind of events. This special session is devoted to the most recent developments in the realm of many-valued logics, with particular emphasis on theoretical advances related to algebraic or alternative semantics, combinatorial aspects, topological and categorical methods, proof theory and game theory, and many-valued computation. In particular, results directed towards a better understanding of the natural semantics of non-classical events will be appreciated. Further, special attention is also given to connections and synergies between many-valued logics and other formal approaches to vague and approximate reasoning, such as Rough Sets, Formal Concept Analysis and Relational Methods.
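As a small concrete anchor, the sketch below evaluates the standard connectives of Łukasiewicz logic, one of the many-valued systems referred to above, on truth values in [0, 1]; the example formula and truth degrees are illustrative.

def luk_and(x, y):          # strong conjunction (Łukasiewicz t-norm)
    return max(0.0, x + y - 1.0)

def luk_or(x, y):           # strong disjunction (bounded sum)
    return min(1.0, x + y)

def luk_implies(x, y):      # residual implication
    return min(1.0, 1.0 - x + y)

def luk_not(x):             # involutive negation
    return 1.0 - x

# Truth degree of (p -> q) AND (q -> p) for partially true events p and q.
p, q = 0.8, 0.3
print(luk_and(luk_implies(p, q), luk_implies(q, p)))   # prints 0.5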
Scope and Topics
A partial list of topics is the following:
Organized by Qiang Shen, Laszlo Koczy, Shyi-Ming Chen and Ying Li
Fuzzy interpolation provides a flexible means to perform reasoning in the presence of insufficient knowledge that is represented as sparse fuzzy rule bases. It enables approximate inferences to be carried out from a rule base that does not cover a given observation. Fuzzy interpolation also provides a way to simplify complex system models and/or the process of fuzzy rule generation. It allows the reduction of the number of rules needed, thereby speeding up parameter optimisation and improving runtime efficiency.
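A deliberately simplified, hedged sketch of the core idea follows: when a crisp observation falls in the gap between two sparse rules with triangular fuzzy sets, a conclusion is obtained by linear (KH-style) interpolation of the neighbouring consequents; the toy rule base and the restriction to crisp observations are assumptions made for illustration only.

def interpolate_conclusion(x_obs, rule1, rule2):
    """Linear fuzzy rule interpolation for a crisp observation lying between
    two sparse rules.  Each rule is (antecedent, consequent) with fuzzy sets
    given as triangles (left, peak, right).  Returns the interpolated
    triangular conclusion, parameter by parameter."""
    (a1, b1), (a2, b2) = rule1, rule2
    lam = (x_obs - a1[1]) / (a2[1] - a1[1])      # relative position between peaks
    return tuple((1 - lam) * p1 + lam * p2 for p1, p2 in zip(b1, b2))

# Sparse rule base: "if temperature is LOW then valve is OPEN_WIDE",
#                   "if temperature is HIGH then valve is NEARLY_CLOSED".
rule_low  = ((0, 10, 20), (70, 80, 90))
rule_high = ((60, 70, 80), (10, 20, 30))
print(interpolate_conclusion(40, rule_low, rule_high))   # approx. (40.0, 50.0, 60.0)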
Scope and Topics
The aim of this special session is to provide a forum:
Organized by Vladik Kreinovich, Hung T. Nguyen and Juan Carlos Figueroa Garcia
The relation between fuzzy and interval techniques
is well known; e.g., due to the fact that a fuzzy number can be
represented as a nested family of intervals (alpha-cuts),
level-by-level interval techniques are often used to process
fuzzy data.
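A hedged sketch of this level-by-level processing: two triangular fuzzy numbers are added by decomposing each into alpha-cuts and applying ordinary interval addition at every level; the particular numbers and the alpha grid are illustrative.

def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (left, peak, right) as an interval."""
    left, peak, right = tri
    return (left + alpha * (peak - left), right - alpha * (right - peak))

def add_fuzzy(tri_a, tri_b, levels=5):
    """Add two fuzzy numbers level by level with interval arithmetic."""
    result = {}
    for k in range(levels + 1):
        alpha = k / levels
        (lo_a, hi_a), (lo_b, hi_b) = alpha_cut(tri_a, alpha), alpha_cut(tri_b, alpha)
        result[alpha] = (lo_a + lo_b, hi_a + hi_b)   # interval addition per cut
    return result

# "about 2" + "about 3" reconstructed cut by cut.
for alpha, interval in add_fuzzy((1, 2, 3), (2, 3, 4)).items():
    print(f"alpha={alpha:.1f}: {interval}")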
At present, researchers in fuzzy data processing mainly use interval techniques originally designed for non-fuzzy applications, techniques which are often taken from textbooks and are, therefore, already outperformed by more recent and more efficient methods.
One of the main objectives of the proposed special session is
to make the fuzzy community at-large better acquainted with the
latest, most efficient interval techniques, especially with
techniques specifically developed for solving fuzzy-related
problems.
Another objective is to combine fuzzy and interval techniques,
so that we will be able to use the combined techniques in
(frequent) practical situations where both types of uncertainty
are present: for example, when some quantities are known with
interval uncertainty (e.g., coming from measurements), while
other quantities are known with fuzzy uncertainty (coming from
expert estimates).
Scope and Topics
The topics of this special session will include but are not limited to:
Organized by Jesús Alcalá-Fdez and José M. Alonso
The term Soft Computing usually refers to a family of several preexisting techniques (Fuzzy Logic, Neuro-computing, Probabilistic Reasoning, Evolutionary Computation, etc.) able to work in a cooperative way, taking advantage of the strengths of each individual technique, in order to solve many complex real-world problems for which other techniques are not well suited.
In the last few years, many software tools have been developed for Soft Computing. Although a lot of them are distributed commercially, unfortunately only a few tools are available as open source software. Note, however, that such open tools have recently reached a high level of development. As a result, they are ready to play an important role in industrial and academic research.
Scope and Topics
The aim of this session is to provide a forum to disseminate and discuss Software for Soft Computing, with special attention to Fuzzy Systems Software. We want to offer an opportunity for researchers and practitioners to identify new promising research directions in this area. Potential topics of interest include but are not limited to
Organized by Plamen Angelov, Fernando Gomide, Edwin Lughofer and Igor Skrjanc
Evolving systems are modular systems that simultaneously develop their structure, functionality, and parameters in a continuous,
self-organized, one pass adaptive way from data streams.
During the last 12-15 years, the concept of Evolving Fuzzy Systems (EFS) has become established as a useful and necessary methodology for addressing the problems of imprecision, incremental learning, adaptation and evolution of fuzzy systems in dynamic environments and during on-line/real-time operation. EFS are able to automatically and autonomously adapt themselves to new operating conditions and system states and hence help to guarantee high process safety, especially in the case of highly dynamic and time-variant systems. This is especially necessary when precise and sufficient training data are not available (e.g., because of the high cost of data collection or annotation) for setting up models which cover the whole range of possible system states. Other major topics which can be addressed with EFS are the building of models from massive data streams or even Big Data, and serving as a dynamically adaptable knowledge base within enriched human-machine interaction applications (learning and teaching).
Scope and Topics
The goal of the special session is to provide a broad picture of the recent developments and to explore further (open) research challenges in one or several specific research topics mentioned below.
Organized by Tufan Kumbasar and Hao Ying
Type-2 fuzzy logic control is a technology which takes the fundamental concepts in control from type-1 fuzzy logic and expands upon them in order to deal with the higher levels of uncertainty present in many real-world control problems. A variety of control application areas have been addressed with type-2 fuzzy logic, from control in steel production plants to the control of marine diesel engines and robotic control. For some engineering applications, there is evidence that type-2 fuzzy logic can provide benefits over both traditional forms of control and type-1 fuzzy logic. It is the aim of this special session to attract a comprehensive selection of high-quality current research in the area of type-2 control, motivating further collaboration and providing a platform for the discussion of future directions of type-2 fuzzy logic control by active researchers in the field.
Scope and Topics
This special session will address advances in interval type-2 as well as general type-2 fuzzy logic control, including different types of type-2 fuzzy logic control such as the PID type, model-based, neuro-fuzzy and TSK-based. As such, the session aims to provide both an overview of the current research as well as a window into the future of type-2 fuzzy logic control. Topics include, but are not limited to:
Organized by Hamidreza Izadbakhsh, Marzieh Zarinbal and Amir Zarinbal
Simulation modeling is a broad collection of methods mainly used to imitate the behavior of real processes or real systems over time. These methods have been applied in many areas, from the operational level to the tactical and strategic levels. Major approaches in simulation modeling can be classified into three categories: discrete-event, system dynamics, and agent-based. While discrete-event modeling is used at the operational and tactical levels, system dynamics is mainly used at the strategic level, and agent-based modeling is used at all levels. However, there are many situations in which a real system cannot be modeled using traditional simulation methods: the parameters of the system are uncertain, the functions are vague, and the distributions are imprecise. In these situations, fuzzy logic can be applied. In other words, fuzzy simulation methods give more flexibility for handling uncertainties in real situations and have been applied in many areas such as layout optimization, scheduling, health risk assessment, intelligent transportation systems, advanced traffic management systems, pension fund optimization, maintenance planning, portfolio selection, etc.
Scope and Topics
Given the interest in this area, this special session aims to gather and discuss the latest theoretical and application achievements in analyzing, designing and optimizing simulation models using fuzzy logic. Potential topics include but are not limited to:
Organized by Christian Wagner and Hani Hagras
Type-2 fuzzy sets and systems are paradigms which seek to realize computationally efficient fuzzy systems with the ability to give excellent performance in the face of highly uncertain conditions. Specifically, type-2 fuzzy sets provide a framework for the comprehensive capturing and modelling of uncertain data, which, together with approaches such as clustering and similarity measures (to name but two) provides strong capability for reasoning about and with uncertain information sources in a variety of contexts and applications. Type-2 fuzzy systems combine the potential of type-2 fuzzy sets with the strengths of rule-based inference in order to provide highly capable inference systems over uncertain data which remain white-box systems (i.e. interpretable).
Scope and Topics
The aim of this special session is to present and focus on top-quality research in areas related to the practical aspects and applications of type-2 fuzzy sets and systems. The session will also provide a forum for the academic community and industry to report on recent advances within type-2 fuzzy sets and systems research. Topics include, but are not limited to:
Organized by Ching-Chih Tsai
In recent years, a trend has emerged in which techniques of computational intelligence, learning control and automation have been integrated into intelligent control or automation systems on a variety of scales to meet product-level implementation needs. Many computational intelligence and learning methods, including fuzzy control, neural networks, fuzzy neural networks, CMAC, genetic algorithms, artificial immune networks, particle swarm techniques, ACO, reinforcement learning, etc., have found successful applications in many industrial control and automation fields. In light of this emerging trend, we propose a special session, called “Fuzzy and Intelligent Control Systems”, at FUZZ-IEEE 2016, in order to promote the advanced theory, practice, and interdisciplinary aspects of integrating computational intelligence and learning control in the area of intelligent control and automation systems. This special session aims to disseminate high-quality research results regarding not only theoretical developments in the integration of computational intelligence theories and control techniques, but also effective applications to new and useful physical systems. Particular attention will be paid to selected topics on novel fuzzy, intelligent control and learning methods for uncertain systems.
Organized by Simon Coupland, Robert John and Jonathan Garibaldi
Type-2 fuzzy sets and systems are paradigms which seek to realize computationally efficient fuzzy systems with the ability to give excellent performance in the face of highly uncertain conditions. Specifically, type-2 fuzzy sets provide a framework for the comprehensive capturing and modelling of uncertain data, which, together with approaches such as clustering and similarity measures (to name but two) provides strong capability for reasoning about and with uncertain information sources in a variety of contexts and applications. Type-2 fuzzy systems combine the potential of type-2 fuzzy sets with the strengths of rule-based inference in order to provide highly capable inference systems over uncertain data which remain white-box systems (i.e. interpretable).
Scope and Topics
The aim of this special session is to present and focus on top-quality research in areas related to the underlying theory of type-2 fuzzy sets and systems. There are many open and unanswered questions about the properties and nature of type-2 fuzzy sets and systems, and this session is designed to provide a forum for the academic and industrial communities to report on advances including, but not limited to:
Organized by Derek T. Anderson, Chee Seng Chan and James M. Keller
Fuzzy set theory is the subject of intense investigation in fields like control theory, robotics, biomedical engineering, computing with words, knowledge discovery, remote sensing and socioeconomics, to name a few. However, in the area of computer vision, other fields, e.g., machine learning, and communities, e.g., PAMI, ICCV, CVPR, ECCV, NIPS, are arguably state-of-the-art. In particular, the vast majority of top performing techniques on public datasets are steeped in probability theory. Important questions to the fuzzy set community include the following. What is the role of fuzzy set theory in computer vision? Does fuzzy set theory make the most sense and biggest impact in terms of low-, mid- or high-level computer vision? Furthermore, do current performance measures favor machine learning approaches? Last, is there additional benefit that fuzzy set theory brings, and if so, how is it measured?
Scope and Topics
This special session invites new research in fuzzy set theory in computer vision. It is a follow up to the 2013 FUZZ-IEEE workshop View of Computer Vision Research and Challenges for the Fuzzy Set Community and Fuzzy Set Theory in Computer Vision special sessions in 2014 and 2015. In particular, we encourage authors to investigate their research using public datasets and to compare their results to both fuzzy and non-fuzzy methods. Topics of interest include all areas in computer vision and image/video understanding. Example topics include, but are definitely not limited to, the following:
Organized by Faa-Jeng Lin and Hong-Tzer Yang
Renewable power generation systems in general include wind, photovoltaic (PV), fuel cell and biomass power generation systems. They have been receiving more attention recently due to their cost competitiveness and environmental friendliness compared to fossil-fuel and nuclear power generation. Owing to the relatively higher investment cost of renewable power generation systems, it is important to operate the systems near their maximum power output point, especially for wind and solar PV generation systems. Thus, maximum power point tracking (MPPT) techniques are often required. Moreover, since wind and solar PV power resources are intermittent, accurate prediction and modeling of wind speed and solar insolation are necessary, though difficult. In addition, to provide a more reliable power supply, renewable power generation systems are usually interconnected with the electrical network. As a result, modeling and controlling the electrical network using smart-grid techniques, such as smart meters, micro-grids, and distribution automation, becomes a very important issue. On the other hand, due to the highly nonlinear and time-varying nature of these systems, with unmodeled dynamics, effective use of computational intelligence techniques such as fuzzy systems for the control and modeling of renewable power generation in a smart-grid system turns out to be crucial for successful operation. Hence, the topics of interest of the special session on Fuzzy Systems of Renewable Energy cover the whole range of research and applications of fuzzy systems in renewable power generation and smart grid systems.
Scope and Topics
Topics of interest include, but are not limited to, the following:
Organized by Elpiniki I. Papageorgiou, Engin Yesil and Jose Salmeron
A Fuzzy Cognitive Map (FCM) is an extension of cognitive maps for modeling complex causal relationships easily, both qualitatively and quantitatively. As a Soft Computing technique, it is used for causal knowledge acquisition and for supporting causal knowledge reasoning. The FCM modeling approach resembles human reasoning; it relies on human expert knowledge for a domain, making associations along generalized relationships between domain descriptors, concepts and conclusions. FCMs can also be constructed from raw data. FCMs model any real-world system as a collection of concepts and causal relations among concepts. They combine fuzzy logic and recurrent neural networks, inheriting their main advantages. From an Artificial Intelligence perspective, FCMs are dynamic networks with learning capabilities: as more and more data become available to model the problem, the system becomes better at adapting itself and reaching a solution. They have gained momentum due to their dynamic characteristics and learning capabilities, which make them well suited to modeling and decision-making tasks.
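A hedged sketch of the basic FCM inference loop described above: concept activations are repeatedly propagated through the causal weight matrix and squashed by a sigmoid; the three-concept map is invented for illustration, and other update rules (e.g., with self-memory terms) exist in the literature.

import numpy as np

def fcm_run(weights, state, steps=20, lam=1.0):
    """Iterate an FCM: A(t+1) = sigmoid(A(t) @ W), returning the trajectory."""
    trajectory = [state]
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-lam * (trajectory[-1] @ weights)))
        trajectory.append(state)
    return np.array(trajectory)

# Toy causal map: concept 0 promotes 1, concept 1 promotes 2, concept 2 inhibits 0.
W = np.array([[0.0,  0.7,  0.0],
              [0.0,  0.0,  0.8],
              [-0.6, 0.0,  0.0]])
trajectory = fcm_run(W, np.array([1.0, 0.0, 0.0]))
print("final activations:", np.round(trajectory[-1], 3))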
During the past decade, FCMs have played a vital role in applications across diverse scientific areas, such as the social and political sciences, engineering, information technology, robotics, expert systems, medicine, education, prediction, and the environment.
Scope and Topics
This special session aims to present highly technical papers describing new FCM models and methodologies addressing any of the following specific topics: theoretical aspects, learning algorithms, innovative applications and FCMs extensions. Topics include, but are not limited to:
Organized by Yusuke Nojima, Rafael Alcalá and Hisao Ishibuchi
For more than two decades, evolutionary computation and various meta-heuristics have
frequently been used for fuzzy system design under the name of evolutionary fuzzy systems.
Their learning and adaptation capabilities enable structure and parameter optimization of fuzzy
systems for many kinds of machine learning tasks such as modeling, classification, and rule
mining. Their flexible frameworks also enable the handling of multiple objectives, such as accuracy and interpretability maximization, and many kinds of data types, such as imbalanced, missing, and privacy-preserving data sets. The aim of the session is to provide a forum to disseminate and
discuss recent and significant research efforts on Evolutionary Fuzzy Systems in order to deal
with current challenges on this topic.
Scope and Topics
The session is open to any high quality submission from researchers working at the particular intersection of evolutionary algorithms and fuzzy systems. The topics of this special session are as follows:
Organized by Nicolas Marin, Daniel Sanchez, Anna Wilbik and Rui Jorge Almeida
Linguistic summaries and descriptions of data aim to extract and represent knowledge in the form of a collection of natural language sentences.
The objective is to obtain a text, as if it was produced by a human expert, describing the most relevant aspects of data for a certain user in a
specific context. Automatic generation of data summaries has gained increased relevance with the growing possibilities to store and acquire data, as well as relations between them. In this realm, not only specialized users (e.g. in decision support systems) are interested in this type of approach,
but non-specialized users also show interest in receiving understandable information that is supported by data. Linguistic summaries commonly use fuzzy
set theory to model linguistic variables and incorporate different forms of imprecision in a collection of natural language sentences. In many approaches
they can be considered as quantifier based sentences, hence linguistic summaries constitute a perfect application for new developments in the domain of
fuzzy quantifiers. Furthermore, linguistic summaries have been related to fuzzy rule systems.
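To make the quantifier-based view concrete, here is a hedged sketch of evaluating a summary of the form "Q objects are P" with Zadeh's sigma-count calculus; the membership function, the quantifier "most" and the data are all illustrative assumptions.

def high(value):
    """Illustrative membership of 'high' for values on a 0-100 scale."""
    return min(1.0, max(0.0, (value - 50.0) / 30.0))

def most(proportion):
    """Illustrative fuzzy quantifier 'most' over a proportion in [0, 1]."""
    return min(1.0, max(0.0, (proportion - 0.3) / 0.5))

def summary_truth(data, predicate, quantifier):
    """Truth degree of 'Q objects are P' via the average membership (sigma-count)."""
    proportion = sum(predicate(x) for x in data) / len(data)
    return quantifier(proportion)

salaries = [55, 62, 71, 80, 90, 48, 66, 83]
print("T('Most salaries are high') =", round(summary_truth(salaries, high, most), 3))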
Linguistic summaries and descriptions of data are related to other research areas such as knowledge discovery in databases and intelligent data analysis,
flexible query answering systems for data, human-machine interaction, uncertainty management, heuristics and metaheuristics, natural language generation
or processing. More recently, this field has been related to different paradigms, namely the linguistic description of complex phenomena and computing
with words paradigms.
The objective of this special session is to provide a forum for researchers from the above-indicated areas to present recent developments in linguistic summaries and descriptions of data, as well as to discuss how these different approaches can complement each other for the task of building such systems.
Scope and Topics
Topics of interest include, but are not restricted to:
Organized by Jie Lu, Chin-Teng Lin, Guangquan Zhang, Farookh Khadeer Hussain, Vahid Behbood, Dianshuang Wu, Mahardhika Pratama and Mohsen Naderpour
The volume, variety, velocity, veracity and value of data and data communication are increasing exponentially. The “Five Vs” are the key features of
big data, and also the causes of inherent uncertainties in the representation, processing, and analysis of big data. Also, big data often contains a
significant amount of unstructured, uncertain and imprecise data.
Fuzzy sets, logic and systems enable us to efficiently and flexibly handle uncertainties in big data in a transparent way, thus enabling it to better
satisfy the needs of real world big data applications and improve the quality of organizational data-based decisions. Successful developments in this
area have appeared in many different aspects, such as fuzzy data analysis technique, fuzzy data inference methods and fuzzy machine learning. In particular,
the linguistic representation and processing power of fuzzy sets is a unique tool for bridging symbolic intelligence and numerical intelligence gracefully.
Hence, fuzzy techniques can help to extend machine learning in big data from the numerical data level to the knowledge rule level. It is therefore instructive
and vital to gather current trends and provide a high quality forum for the theoretical research results and practical development of fuzzy techniques in
handling uncertainties in big data.
This special session aims to offer a systematic overview of this new field and to provide innovative approaches to handling various uncertainty issues in big data representation, processing and analysis by applying fuzzy sets, fuzzy logic, fuzzy systems, and other computational intelligence techniques.
Scope and Topics
The main topics of this special session include, but are not limited to, the following:
Organized by Humberto Bustince, Radko Mesiar, Javier Fernandez and Javier Montero
Since its introduction by Zadeh in 1965 it was clear that fuzzy theory was an extraordinary tool for representing human knowledge. Nevertheless,
L. Zadeh himself established in 1973 that sometimes, in decision-making processes, knowledge is better represented by means of some generalizations
of fuzzy sets. In the applied field, in particular, the success of the use of fuzzy set theory depends on the choice of the membership function that
we make. However, there are applications in which experts do not have precise knowledge of the membership function that should be taken. In these
cases, it is appropriate to represent the membership degree of each element to the fuzzy set by means of an interval. From these considerations
arises the extension of fuzzy sets called theory of Interval-valued Fuzzy Sets (IVFSs), that is, fuzzy sets such that the membership degree of each
element of the fuzzy set is given by a closed subinterval of the interval [0, 1].
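A hedged sketch of this representation: each element's membership is stored as a closed subinterval of [0, 1], and standard operations such as intersection and union act endpoint-wise with min and max; the two small sets below are illustrative.

def iv_intersection(a, b):
    """Intersection of interval-valued memberships using min on both endpoints."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def iv_union(a, b):
    """Union of interval-valued memberships using max on both endpoints."""
    return (max(a[0], b[0]), max(a[1], b[1]))

# Interval-valued fuzzy sets over a small universe {x1, x2, x3}.
A = {"x1": (0.2, 0.4), "x2": (0.5, 0.8), "x3": (0.0, 0.1)}
B = {"x1": (0.3, 0.5), "x2": (0.4, 0.6), "x3": (0.7, 0.9)}

print({x: iv_intersection(A[x], B[x]) for x in A})
print({x: iv_union(A[x], B[x]) for x in A})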
The theory of interval-valued fuzzy sets has attracted a lot of interest since its origin, and especially in recent years, when applications have appeared in which the use of intervals has improved the results of well-known algorithms in, for instance, classification or image processing.
Scope and Topics
This special session will be dedicated to theoretical and practical aspects of interval-valued fuzzy sets. We hope to bring together some of the leading experts in this field, as well as researchers interested in it, to share their work. In particular, this session covers (but is not limited to) the following topics:
Organized by Syoji Kobashi, Gerald Schaefer, Hiroharu Kawanaka and Atsushi Inoue
The purpose of this special session is to disseminate and discuss recent and significant research issues on how intelligent methodologies can be used to solve challenging problems related to medical, biomedical, and healthcare fields. This special session will be held under IEEE CIS Task Force of "Fuzzy Logic in Medical Sciences".
Scope and Topics
Organized by Valentina E. Balas, Tsung-Chih Lin, Rajeeb Dey, Yu-Chen Lin and Seshadhri Srinivasan
The aim of this special session is to present state-of-the-art results in the area of adaptive intelligent control theory and applications and to bring together researchers in this area. Adaptive control is a technique that applies methods to obtain a model of the process and uses this model to design a controller. In particular, fuzzy adaptive control has been an important area of active research. Significant developments have been seen, including theoretical successes and practical designs. One of the reasons for the rapid growth of fuzzy adaptive control is its ability to control plants with uncertainties during operation.
Scope and Topics
The papers in this special session present the most advanced techniques and algorithms of adaptive control. These include various robust techniques, performance enhancement techniques, techniques with less a-priori knowledge and nonlinear intelligent adaptive control techniques. This special session aims to provide an opportunity for international researchers to share and review recent advances in the foundations, integration architectures and applications of hybrid and adaptive systems. Topics of interest include, but are not limited to:
Organized by Enrique Herrera-Viedma, Francisco Chiclana, Yucheng Dong and Francisco Javier Cabrerizo
The development of formal mathematical models to support experts in making decisions is of great importance to assure that the actions derived from a decision outcome are theoretically sound. This is of special relevance in decision contexts where the information on the problem at hand is not amenable to being modelled in a quantitative and precise way. Other issues to be addressed are the inconsistency of information and the dynamic nature of the decision-making process itself. This type of decision-making is now being described as decision-making under uncertainty in inconsistent and dynamic environments.
Scope and Topics
This special session aims at gathering researchers with an interest in the research area described above. Specifically, we are interested in contributions towards the development of consensus models for such decision-making problems, as well as formal approaches that are able to support incomplete or missing information.
Contributions to this special session are expected to pay special attention to the rigorous motivation of the approaches put forward and to support all aspects of the models developed with a corresponding theoretically sound framework. Approaches lacking such scientific grounding are discouraged.
An indicative, but not complete, list of topics covered in this session includes:
Organized by Valentina E. Balas, Camelia Pintea, Ahmad Taher Azar, Mario Pavone, Rabie A. Ramadan and Nicolaie Popescu-Bodorin
In nature there are many examples that could help humanity to develop new projects and to improve and solve some complex real-life problems. Bio-inspired fuzzy systems have the ability to include both natural computing and real-life coefficients of uncertainty to keep the solutions of large-scale static and dynamic problems in balance. The strategies of natural organisms (such as ants, bees, nano-bots, swarms, flocks, etc.) include adaptation and learning based on environmental changes, incomplete input information and the presence of noise. That is why Artificial Intelligence uses bio-inspired techniques, like ant colonies, artificial immune systems, swarm intelligence, neural networks, evolutionary computation, and not least fuzzy logic, to solve difficult problems.
Scope and Topics
The aim of this special session is to provide an opportunity for international researchers to share and review recent advances in the foundations, integration architectures, and applications of bio-inspired fuzzy logic systems in Pattern Recognition, Bioinformatics and Computational Biology, Healthcare, Industry, Microelectronics, Transportation, Green Logistics, Social Networks, Web Services, Cloud Computing and other domains. The topics of interest include, but are not limited to:
Organized by Nilanjan Dey, Amira S. Ashour, Dana Balas Timar and Valentina Emilia Balas
Recently, intensive research attention has been devoted to medical image analysis. Neural network applications in computer-aided diagnosis (CAD) represent the main stream of computational intelligence in medical imaging. Moreover, neural networks are capable of optimizing the input/output relationship via distributed computing, training, and processing, which leads to reliable solutions that meet specifications and support medical diagnosis.
In the medical domain, the relation between accurate diagnosis and treatment can be assessed. Through medical imaging modalities, physicians are able to collect and measure information in the form of signals and/or images that reflect the anatomical structure as well as the function of the human body. This field takes advantage of advances in computing. For the sake of effective diagnosis, computer-aided systems have become essential to interpret and combine the acquired images for the purpose of diagnosis and intervention.
In spite of the success achieved by neural networks in the medical domain, constructing multilayer neural networks involves challenging optimization problems. This session focuses on computational intelligence with neural networks, covering medical image segmentation, registration, and edge detection for medical image analysis. In addition, computer-aided detection/diagnosis, with particular coverage of cancer screening and other applications, gives a global view of the variety of neural network applications and their potential for further research and development.
Consequently, this special session is designed to allow researchers, designers and developers to publish innovative and state-of-the-art algorithms and architectures for medical image analysis based on neural networks and computational intelligence techniques.
Scope and Topics
The aim of this special session is to explore existing neural network and computational intelligence approaches, to develop new efficient algorithms, to discuss existing problems and challenges, to propose solutions, and to foster collaboration on promising future research directions in medical domain analysis based on computational intelligence and machine learning. The topics of interest include:
Organized by Scott Dick
Complex fuzzy sets are an extension to type-1 fuzzy sets in which membership grades are complex-valued. Likewise, complex fuzzy logic is an isomorphic family of multi-valued logics whose truth values are complex numbers. In the ten years since these concepts were first proposed, further theoretical investigations and a number of applications have made complex fuzzy sets and logic a lively and growing research area.
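To make the setting concrete, one widely cited formulation (due to Ramot et al.) writes the membership grade of a complex fuzzy set S as a point in the complex unit disc,

    \mu_S(x) = r_S(x)\, e^{j\,\omega_S(x)}, \qquad r_S(x) \in [0,1],\ \ \omega_S(x) \in [0, 2\pi),

where r_S(x) is an ordinary amplitude membership and \omega_S(x) is a phase term; other parameterizations of complex-valued membership also appear in the literature.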
Scope and Topics
This special session will provide a forum to consolidate the community of researchers in this area, share our current ideas, reflect on future directions, and communicate our ideas and vision to the larger Computational Intelligence community. As such, we welcome submissions on all aspects of complex fuzzy sets or complex fuzzy logic, including but not limited to:
Organized by Mohammad H. Fazel Zarandi, Jerry Mendel and Burhan Turksen
In many real-world problems we encounter highly uncertain information and knowledge on which decision making must be based. In such situations, type-1 or interval type-2 fuzzy set theory can be used to model and solve problems with vague information and knowledge. In some problems the information is too vague to model with either type-1 or interval-valued type-2 fuzzy sets, so full type-2 or higher-level fuzzy sets are used instead. In full type-2 fuzzy sets, each element is represented by two memberships, named the primary and secondary memberships. This shows the capability of full type-2 fuzzy sets to contain and represent more information than type-1 or interval type-2 fuzzy sets, which makes real problems with higher degrees of uncertainty tractable. Hence, there is an increasing need for more research in the area of type-2 and higher-level fuzzy systems and modelling to manage highly uncertain problems.
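For reference, a full (general) type-2 fuzzy set is commonly written as

    \tilde{A} = \{\, ((x,u),\ \mu_{\tilde{A}}(x,u)) \mid x \in X,\ u \in J_x \subseteq [0,1] \,\}, \qquad 0 \le \mu_{\tilde{A}}(x,u) \le 1,

where u is the primary membership of x and \mu_{\tilde{A}}(x,u) is the corresponding secondary membership grade; an interval type-2 fuzzy set is recovered when all secondary grades equal 1.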
Scope and Topics
Given the increasing need for developing type-2 and higher-level fuzzy systems, this session welcomes researchers and papers in the area of the theory and applications of type-2 and higher-level fuzzy systems. The topics of this session include, but are not limited to, the following areas:
Organized by Aminah Robinson Fayek and Chrysostomos Stylios
Construction engineering and management research has seen significant growth in fuzzy logic and computational intelligence applications to solve numerous problems. Fuzzy logic and computational intelligence have been used to model subjective information, handle uncertainty, and address the lack of comprehensive data sets available for modeling in construction engineering and management. In the construction domain, fuzzy logic has been combined with other soft computing techniques and computational intelligence methods to model, simulate, and create hybrid dynamic systems. This session will focus on recent advances and applications of fuzzy logic and computational intelligence techniques for applications related to planning and scheduling, estimating and bidding, productivity, organization competency, project control, structuring projects, process improvement, risk analysis, and others. In particular, challenges related to applying fuzzy logic in the construction engineering and management domain will be discussed and ideas generated on how to adapt fuzzy logic and fuzzy hybrid techniques to better suit construction applications.
Scope and Topics
The main topics of this special session include, but are not limited to, the application of the following approaches to construction engineering:
Organized by Mikel Galar, Bartosz Krawczyk and Isaac Triguero
The aim of this special session is to serve as a forum for the exchange of ideas and
discussions on recent and new trends regarding intersections between fuzzy systems
and machine learning methods. Machine learning is a very active research field
because of the huge number of real-world applications that can be addressed by this
field of research. There are many contemporary problems, besides the canonical
classification, regression or clustering, that require special focus and development of
novel and efficient solutions. Such challenges include the problem of imbalanced data,
learning on the basis of low quality and noisy examples, multi-label and multi-instance
problems, or having limited access to object labels at the training phase, among others.
Learning methods based on Soft Computing techniques are widely used to face the
aforementioned challenges with promising results. Fuzzy systems have demonstrated
the ability to provide at the same time interpretable models understandable by human
beings, as well as highly accurate results. Moreover, fuzzy-based techniques are of
great interest when dealing with low quality or noisy data as they provide a framework
to manage uncertainty. Evolutionary computation provides robust techniques for optimization, learning, and preprocessing tasks; it can adapt the model parameters to each problem to obtain a highly accurate system, forming a good synergy with fuzzy approaches.
We encourage authors to submit original papers as well as preliminary and promising works on the topics of this special session.
Scope and Topics
The aim of the session is to provide a forum for the exchange of ideas and discussions on Soft Computing techniques and algorithms for machine learning, in order to deal with the current challenges in this topic. The special session is therefore open to high-quality submissions from researchers working on learning problems using soft computing techniques. The topics of this special session include fuzzy models for handling data-level difficulties and improving machine learning methods in areas such as:
Organized by Jun Yoneyama and Zsofia Lendek
The aim of this special session is to present state-of-the-art results in the theory and applications of fuzzy control system design and analysis, and to bring together well-known and up-and-coming researchers in this area. Fuzzy control system design and analysis provide a systematic and efficient approach to the control of nonlinear plants and the analysis of nonlinear control systems. Fuzzy control systems have been employed to deal with a wide range of nonlinear control problems, and a number of results in this area have appeared in the literature. However, there is still room to improve the existing results and to propose new techniques for the control of nonlinear systems. This special session focuses mainly on fuzzy control system design and analysis, with emphasis on both theory and applications. Important problems and difficulties in fuzzy control systems will be addressed, the underlying concepts will be presented, and methodologies will be proposed for handling nonlinear systems using fuzzy control approaches.
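As one illustration of the systems this session addresses, a Takagi-Sugeno fuzzy model represents a nonlinear plant by blending local linear models, for example

    \text{Rule } i:\ \text{IF } z_1(t) \text{ is } M_{i1} \text{ and } \dots \text{ and } z_p(t) \text{ is } M_{ip},\ \text{THEN } \dot{x}(t) = A_i x(t) + B_i u(t),

    \dot{x}(t) = \sum_{i=1}^{r} h_i(z(t))\,\big(A_i x(t) + B_i u(t)\big), \qquad h_i(z(t)) \ge 0,\ \ \sum_{i=1}^{r} h_i(z(t)) = 1,

where the normalized memberships h_i weight the local models; stability analysis and controller synthesis for such models are typically cast as linear matrix inequality conditions.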
Scope and Topics
The main topics of this special session include, but are not limited to:
Organized by Luka Eciolaza and George Panoutsos
Modern manufacturing environments are evolving considerably in
order to adopt new ICT technologies and exploit their full potential to
develop the so-called factories of the future. Some of the main objectives include: (i) making manufacturing processes sustainable (highly efficient, productive, high-quality, and accurate adaptive production processes), (ii) integrating human expert knowledge with the technology, (iii) reducing the use of resources and the generation of waste,
(iv) opening new markets. The use of digital technologies throughout
the manufacturing value chain plays a key role in order to achieve
these goals.
Advanced Manufacturing implies an advanced degree of automation,
autonomy and digitization within industrial processes and factories.
Thus, advances in electronics and information technologies are
considered key enabling technologies which are driving the
transformation of current manufacturing systems towards the so-called “Intelligent Factories”. The volume of data generated and archived in the manufacturing processes (design, simulation, monitoring, quality control, maintenance, etc.) represents a rich source of information
which could potentially provide deep insight into the underlying
physical processes and could also be used for process optimization.
However, acquired process information is usually heterogeneous,
complex, and with various degrees of uncertainty. Thus, intelligent data
processing and analysis is an essential mechanism in order to extract
useful knowledge models for their use in decision making.
Scope and Topics
The goal of this special session is to provide an insight into state of the art use of fuzzy logic based solutions in advanced manufacturing environments. These solutions should target mainly applications of: product design optimization, new manufacturing architectures for flexible manufacturing, product lifecycle management (PLM), zero defect manufacturing, additive manufacturing, maintenance services, computer aided monitoring and quality nondestructive testing (NDT), collaborative manufacturing environments. Potential topics of interest include but are not limited to:
Organized by Giovanni Acampora, Chang-Shing Lee, Trevor Martin and Marek Reformat
Web intelligence is the area of scientific research and development that explores the roles and makes use
of artificial intelligence and information technology methodologies for enabling the design and implementation of new products,
services and frameworks that are empowered by the World Wide Web. In particular, Web intelligence achieves this goal
through a combination of digital analytics, which examines how website visitors view and interact with a site’s pages and
features, and business intelligence, which allows a corporation’s management to use data on customer purchasing patterns,
demographics, and demand trends to make effective strategic decisions. For example, search engines are among the Internet applications that benefit most from this approach. Thanks to the aforementioned combination of
technologies, Web Intelligence enables the implementation of enhanced systems aimed at improving users' experience in
using and manipulating web resources, and companies' activities in deploying profiled and personalised contents and
services. However, the imprecise and vague nature of the World Wide Web, due to the large amount of information online and the
different types of interaction that users and companies can have with this information, requires a new vision of web
intelligence in which the treatment of uncertainty is a key factor: Fuzzy Web Intelligence. Indeed, recent literature review
suggests that more and more successful developments in Web Intelligence are being integrated with fuzzy sets to enhance
smart functionality such as web search systems by fuzzy matching, Internet shopping systems using fuzzy multi-agents,
product recommender systems supported by fuzzy measure algorithms, e-logistics systems using fuzzy optimisation models;
online customer segments using fuzzy data mining, fuzzy case-based reasoning in e-learning systems, and particularly online
decision support systems supported by fuzzy set techniques. In light of these observations, this special session is intended
to form an international forum presenting innovative developments of fuzzy set applications in Web-based support systems.
The ultimate objective is to bring together well-focused, high-quality research results in Fuzzy Web Intelligence systems with the intent to
identify the most promising avenues, report the main results and promote the visibility and relevance of fuzzy sets.
Scope and Topics
The main topics of this special session include, but are not limited to:
Organized by Mohammad H. Fazel Zarandi, Oscar Castillo, Burhan Turksen and Behshad Lahijanian
In recent years, “Big Data” and “Data Mining” have become new ubiquitous terms.
Big data and data mining are transforming science, engineering, medicine,
healthcare, finance, business, and ultimately society itself. On the other hand,
pattern recognition focuses on detecting regularities in data and tries to classify observations. Classification, data clustering, regression, sequence labeling, and parsing are some pattern recognition methods.
Because pattern recognition techniques are capable of discovering patterns in data, they have facilitated data processing and intelligent decision making in many areas, and there is an increasing need for more research in pattern recognition and data mining to handle complex problems. Given this increasing need to manage the complexity of systems, this session welcomes researchers and papers in the different areas of the theory and applications of pattern recognition, data mining, intelligent agents, etc.
Scope and Topics
The main topics of this special session include, but are not limited to:
Organized by Luis Martínez, Rosa M. Rodríguez and Francisco Herrera
Decision Making is an inherent human task related to intelligent and complex
activities in which human beings face situations where they must choose among
different alternatives by means of reasoning and mental processes. Such
decision situations usually involve different types of uncertainty according to
their nature. The fusion of information can reduce uncertainty and facilitate the
decision making process because it associates, correlates and combines
information from multiple sources to provide a relevant and timely view of the
situation.
Therefore, information fusion in decision making has been widely studied from
different points of view according to the framework in which it should be
developed. However, there are still different open challenging problems related
to information fusion and decision making because of the necessity of dealing
with either novel decision making problems with new types of uncertainty and
their modelling or with the advances in information fusion that imply
improvements regarding previous approaches.
Additionally, many real decision situations are defined in uncertain contexts with imprecise information, in which the use of linguistic information is natural. Models based on the fuzzy linguistic approach and Computing with Words (CW) provide the tools and methodology to deal with words. CW emulates human cognitive processes to improve decision-solving processes under uncertainty. Consequently, information fusion processes, the fuzzy linguistic approach, and CW have been applied as the modelling and computational basis for linguistic decision making, because they provide tools close to human reasoning processes and thus improve and facilitate the resolution of decision making under uncertainty in the form of linguistic decision making.
Information fusion, decision making, the fuzzy linguistic approach, and Computing with Words have all recently attracted much attention, with novel mathematical foundations and new decision models arising for application in different decision fields such as multi-criteria decision making, decision analysis, evaluation processes, and consensus reaching processes.
Scope and Topics
This invited session aims to provide an opportunity for researchers working in both research areas to discuss and share their new ideas, original research results, and practical experiences. More specifically, we welcome any contribution focused on the use of linguistic modelling in decision making. The topics of this special session are as follows:
Organized by Jaroslav Ramik, Radomir Perzina, Elena Mielcova, Jiri Mazurek, Hana Tomaskova and Richard Cimler
Decision analysis based on uncertain data is natural in many real-world applications, and
sometimes such an analysis is inevitable. In the past years, researchers have proposed many
efficient operations research models and methods, which have been widely applied to real-life
problems, such as finance, management, manufacturing, supply chain, transportation, among
others. This special session aims to provide a forum for advancing the analysis,
understanding, development, and practice of uncertainty theory and operations research for
solving economic, engineering, management, and social problems.
Scope and Topics
The goal of this special session is to provide an excellent forum for the discussion of the latest methods of operations research. The scope of the session will be focused on development of new methods of multiple criteria decision making under conditions of uncertainty and risk based on possibility theory and fuzzy sets theory. A new theory based on fuzzy relations and duality principle will be welcome. We invite the submission of high-quality, original and unpublished papers in this area. Interesting applications are welcome. The topics of interest include, but are not limited to:
Organized by Swati Aggarwal
Real-life problems often call for decision making under uncertainty, meaning we have to make a choice based on incomplete and indeterminate input data, often with unknown outcomes. Researchers working on automated decision support systems make provisions for handling different types and potential sources of uncertainty while harmonizing the numerous aims of the system.
Real-world problems have been effectively modeled using fuzzy logic, which gives a suitable representation of real-world data/information and enables reasoning that is approximate in nature. It is quite uncommon for the inputs captured by fuzzy models to be 100% complete and determinate. While humans can make intelligent decisions in such situations, fuzzy models require complete information. Incompleteness and indeterminacy in the data can arise from inherent non-linearity, the time-varying nature of the process to be controlled, large unpredictable environmental disturbances, degrading sensors, or other difficulties in obtaining precise and reliable measurements. Neutrosophic logic is an extended and general framework for measuring the truth, indeterminacy, and falsehood of information. It is effective in representing different attributes of information such as inaccuracy, incompleteness, and ambiguity, thus giving a fair estimate of the reliability of the information.
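For readers new to the formalism, a (single-valued) neutrosophic set A over a universe X assigns to each element a triple of independent degrees,

    x \ \longmapsto\ \big(T_A(x),\ I_A(x),\ F_A(x)\big), \qquad T_A(x), I_A(x), F_A(x) \in [0,1],\ \ 0 \le T_A(x) + I_A(x) + F_A(x) \le 3,

where T_A, I_A and F_A denote truth, indeterminacy and falsity; unlike in fuzzy or intuitionistic fuzzy sets, the three degrees are independent, which is what allows indeterminate or incomplete inputs to be represented explicitly.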
This special session aims to provide a platform, where researchers coming from academia and industry can exhibit the varied practices of handling uncertainty in varied domains through the concepts of Neutrosophic Logic, communicate the connections amongst procedure and practice, and explain the contemporary case studies in different areas of application.
Scope and Topics
Topics of interest include, but are not limited to:
Organized by Susan Bastani, Mohammad Hossein Fazel Zarandi, Jerry Mendel and Mansoureh Naderipour
Nowadays, social network analysis is one of the main research subjects in computational intelligence, computer science, and sociology. Its importance grows every day with the expansion of social media, networks, and technological advancement.
Social networks, especially those with typical, non-commercial applications, are places in the virtual world where people briefly introduce themselves and can communicate with like-minded members across various areas of interest. Virtual social networks will clearly become even more important and popular in the future; with social networks, people are no longer alone in finding others who share their interests.
Determining and predicting communications within a network is the main interest of social network scientists and researchers. In real-world social networks, communications are not usually defined crisply; in other words, human communication usually involves imprecision and vagueness. Fuzzy theory, especially type-2 fuzzy logic, is a very powerful approach for modelling social networks and analysing the different ties (strong or weak) between the nodes of their graphs. Given the increasing need for developing fuzzy approaches to social networks, this session welcomes researchers and papers in the area of the theory and applications of fuzzy theory (type-1 and type-2) in social networks.
Scope and Topics
The topics of this session include but are not limited to the following areas:
Organized by Rui Wang, Sanaz Mostaghim, Tao Zhang and Shengxi Yang
Due to the rapid industrialization and the scarcity of conventional energy resources such as coal and
natural gas, it has become increasingly urgent to find effective and efficient ways for energy use. The
“Energy Internet System (EIS)” is a peer-to-peer energy exchange and sharing network which
effectively integrates different energy sources together, including both conventional energy
resources and renewable energy sources like solar and wind, and has become a promising solution.
However, various optimization issues exist in EISs, for example the optimal structural design of the EIS, the optimal control and management of energy exchange, and the optimal scheduling of energy flow among different nodes. Moreover, hybrid renewable energy systems (HRESs) are often used in an EIS. The design of an HRES is effectively a multi-objective optimization problem, in which multiple objectives (such as lifetime system cost, carbon emissions, and system reliability) are to be optimized simultaneously. Therefore, the need for researchers from both the optimization side and the energy side to develop more effective and efficient methods to tackle issues arising in Energy Internet Systems has become apparent.
The main aim of this special session is to bring together both experts and newcomers from academia and industry to discuss new and existing optimization issues in an EIS, in particular to cross-fertilize academic research and industry applications, and to stimulate further
engagement with the user community.
Scope and Topics
Full papers are invited on recent advances in the development of EISs and on new horizons, i.e., using multi-criteria decision making methods for EIS design and/or management. In addition, we are interested in various studies discussing optimization issues in EISs or related real-world applications. You are invited to submit unpublished original work to this special session. The topics include, but are not limited to:
Organized by Carsten Mueller, Markus Brenkner and Andre Hofmeister
A globally operating logistics company uses metaheuristics to optimize the time-consuming and complex computation of transport paths. The company is connected worldwide and has locations in Hamburg, St. Petersburg, New York, Hong Kong, and Shanghai. Skilled employees continuously improve the computation of transport paths and evolve new algorithms.
A flexible solution called "Intelligent Evaluation Of Complex Algorithms" (IEOCA), which facilitates the collaboration and intelligent evaluation of existing and newly generated algorithms, is installed on a secure cloud platform in Hamburg. IEOCA provides a highly flexible, scalable, component-based three-layer architecture. The layers are protected with secure X.509 certificates and form the basis of a trusted company network. Furthermore, the layers are linked through dynamically configurable service channels that ensure extremely high-performance data exchange.
Every algorithm is encapsulated as a modular component and attached to the cloud platform for evaluation purposes. These components are based on interfaces and are easy to develop as well as to maintain. A team of experts evaluates the quality of the algorithms with fixed methods and expensive reports. In addition, IEOCA provides an automatic evaluation monitor, which observes the performance of each algorithm (e.g. runtime, memory usage, or results) and naturally replaces worse-performing components.
The IEOCA framework consumes information directly from the production system via an extremely fast and lightweight communication channel. This mechanism allows the company to generate the maximum economic benefit, and improvements directly affect the daily business.
This workshop presents the implemented IEOCA platform through an impressive use case and gives participants insight into its architecture as well as its performance.
Scope and Topics
The proposed special session aims to bring together theories and applications of a dynamic component-based software architecture to the intelligent evaluation of complex algorithms. Topics of interest include, but are not limited to:
Organized by Helio J. C. Barbosa, Yong Wang and Efren Mezura-Montes
In their original versions, nature-inspired algorithms for optimization such as evolutionary
algorithms (EAs) and swarm intelligence algorithms (SIAs) are designed to sample unconstrained
search spaces. Therefore, a considerable amount of research has been dedicated to adapting them to
deal with constrained search spaces. The objective of the session is to present the most recent
advances in constrained optimization for single-, multi-, and many-objective optimization, using
different nature-inspired techniques.
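As a simple point of reference, one of the oldest ways to adapt such algorithms to a constrained problem min f(x) subject to g_j(x) \le 0 and h_k(x) = 0 is a static penalty function,

    \min_{x}\ \ f(x) \;+\; \lambda \Big( \sum_{j} \max\big(0,\ g_j(x)\big)^{2} \;+\; \sum_{k} h_k(x)^{2} \Big), \qquad \lambda > 0,

where \lambda trades off objective value against constraint violation; much of the work sought by this session replaces this basic scheme with adaptive penalties, feasibility rules, repair operators, stochastic ranking, or multi-objective constraint handling.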
Scope and Topics
The session seeks to promote the discussion and presentation of novel works related to (but not limited to) the following issues:
Organized by Mengjie Zhang, Vic Ciesielski and Mario Koppen
Computer vision is a major unsolved problem in computer science and engineering. Over the last decade
there has been increasing interest in using evolutionary computation approaches to solve vision problems.
Computer vision provides a range of problems of varying difficulty for the development and testing of
evolutionary algorithms. There have been a relatively large number of papers in evolutionary computer
vision in recent CEC and GECCO conferences. It would be beneficial to researchers to have these papers
in a special session. Also, a special session would encourage more researchers to continue to work in
this field and consider CEC a place for presenting their work.
Scope and Topics
The proposed special session aims to bring together theories and applications of evolutionary computation
to computer vision and image processing problems. Topics of interest include, but are not limited to:
New theories and methods in different EC paradigms for computer vision and image processing including
Organized by Su Nguyen, Yi Mei and Mengjie Zhang
Evolutionary Scheduling and Combinatorial Optimization is an active research area in both Artificial Intelligence and Operations Research due to its applicability and
interesting computational aspects. Evolutionary techniques are suitable for these problems since they are highly flexible in terms of handling constraints, dynamic
changes and multiple conflicting objectives.
Scope and Topics
This special session focuses on both theoretical and practical aspects of Evolutionary Scheduling and Combinatorial Optimization. Examples of evolutionary methods include genetic algorithms, genetic programming, evolution strategies, ant colony optimisation, particle swarm optimisation, evolutionary-based hyper-heuristics, and memetic algorithms.
Organized by Bing Xue, Mengjie Zhang and Yaochu Jin
Many data mining and machine learning problems involve a large number of features/attributes, which leads to “the curse of dimensionality”.
However, not all the features are essential since many of them are redundant or even irrelevant, and the “useful” features are typically not
equally important. This problem can be solved by feature selection to select a small subset of original (relevant) features or feature
construction to create a smaller set of high-level features using the original low-level features and mathematical or logical operators.
Feature selection and construction are challenging tasks due to the large search space and feature interaction problems. Recently, there has
been increasing interest in using evolutionary computation techniques to solve feature selection and construction tasks.
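As a simplified illustration of the evolutionary feature selection approaches this session covers, the sketch below evolves a binary feature mask with a plain generational GA; the synthetic data, the correlation-based fitness proxy (used here instead of a wrapped learner), and all constants are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data (assumption): 100 samples, 20 features, only the first 3 carry signal.
    X = rng.normal(size=(100, 20))
    y = X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=100)

    def fitness(mask):
        # Reward subsets whose features correlate with y; lightly penalise subset size.
        if mask.sum() == 0:
            return -1.0
        corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.where(mask)[0]]
        return float(np.mean(corr)) - 0.01 * mask.sum()

    pop = rng.integers(0, 2, size=(30, X.shape[1]))            # binary-encoded feature subsets
    for gen in range(50):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-15:]]                # truncation selection
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(0, len(parents), 2)]
            cut = rng.integers(1, X.shape[1])                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(X.shape[1]) < 0.05               # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected features:", np.where(best == 1)[0])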
Scope and Topics
The theme of this special session is the use of evolutionary computation for feature reduction, covering ALL different evolutionary computation
paradigms. The aim is to investigate both the new theories and methods in different evolutionary computation paradigms to feature
reduction, and the applications of evolutionary computation for feature reduction. Authors are invited to submit their original and unpublished work
to this special session.
Topics of interest include but are not limited to:
Organized by Ankur Sinha and Kalyanmoy Deb
Bilevel optimization problems are a special kind of optimization problem that involves two levels of
optimization, namely upper level and lower level. The hierarchical structure of the problem requires
that every feasible solution to the upper level problem should satisfy the optimality conditions of
the lower level problem. Such a requirement makes bilevel optimization problems difficult to solve.
These problems are commonly found in many practical problem solving tasks, which include
optimal control, process optimization, game-playing strategy development, transportation problems,
coordination of multi-divisional firms, machine learning, and others. Due to the computational expense and other difficulties involved in handling such problems, they are often solved using
approximate solution procedures. There is a need for theoretical as well as methodological
advancements to handle such problems efficiently.
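In generic form (stated here only as a reference sketch, with the usual notation), a bilevel problem reads

    \min_{x_u \in X_U,\ x_l \in X_L}\ F(x_u, x_l) \quad \text{s.t.}\quad G_k(x_u, x_l) \le 0, \qquad x_l \in \operatorname*{arg\,min}_{x_l' \in X_L} \big\{\, f(x_u, x_l')\ :\ g_j(x_u, x_l') \le 0 \,\big\},

where F and G_k are the upper-level objective and constraints and f and g_j their lower-level counterparts; the nested argmin is what makes even checking the feasibility of an upper-level point expensive.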
Scope and Topics
The special session on Bilevel Optimization will focus on the following topics:
Organized by Hiroyuki Sato and Antonio Lopez Jaimes
Evolutionary algorithms are particularly suited to solving multi-objective optimization problems since they can obtain a set of non-dominated solutions approximating the Pareto front in a single run of the algorithm. So far, multi-objective EAs have been successfully applied mostly to problems with two and three objectives. However, multi-objective EAs face several difficulties when we try to solve many-objective optimization problems, which optimize four or more objective functions simultaneously. At least the following difficulties have been recognized in recent research on evolutionary many-objective optimization:
(1) The deterioration of the convergence of solutions toward the Pareto front
(2) The approximation of the entire high-dimensional Pareto front with a limited number of solutions in the population
(3) The presentation of the obtained solutions in the high-dimensional objective space and the selection of a single final solution from them
(4) The evaluation of the search performance of algorithms
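For completeness, the dominance relation underlying difficulties (1) and (2) is the standard one: in an M-objective minimization problem, x dominates y when

    f_i(x) \le f_i(y)\ \ \text{for all } i \in \{1,\dots,M\} \quad \text{and} \quad f_i(x) < f_i(y)\ \text{for at least one } i.

As M grows, randomly generated solutions tend to become mutually non-dominated, so Pareto dominance alone provides little selection pressure toward the front, which is one reason the above difficulties arise.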
Scope and Topics
This special session will focus on evolutionary many-objective optimization to tackle problems in many-objective optimization, including the above-mentioned difficulties and the following topics (but not limited to):
Organized by Yong Wang, Zixing Cai, Qingfu Zhang and Crina Grosan
Nonlinear equation systems (NESs) frequently arise in many physical, electronic, and mechanical processes. Very often, a NES may
contain multiple optimal solutions. Since all these optimal solutions are important for a given NES in the real-world applications,
it is desirable to locate them simultaneously in a single run, such that the decision maker can select the final solution which best matches his/her preference.
For solving NESs, several classical methods, such as Newton-type methods, have been proposed. However, these methods have some disadvantages
in the sense that they are heavily dependent on the starting point of the iterative process, can easily get trapped in a local optimal solution,
and require derivative information. Moreover, these methods aim at locating just one optimal solution rather than multiple optimal solutions when
solving NESs. During the past decade, evolutionary algorithms (EAs) have been widely applied to solve NESs due to the fact that EAs are insensitive
to the shapes of the objective function and easy to implement.
Solving NESs by EAs is a very important area in the community of evolutionary computation, which is challenging and of practical interest. However,
systematic work in this area is still very limited. This is the first special session at the IEEE CEC to facilitate the development of EAs for NESs.
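As background, a NES and its usual reformulation as an optimization problem can be written, in one common form, as

    e_i(x) = 0,\ \ i = 1,\dots,m,\ \ x \in \Omega \subseteq \mathbb{R}^{n} \qquad \Longrightarrow \qquad \min_{x \in \Omega}\ \sum_{i=1}^{m} e_i(x)^{2},

so that every root of the system is a global minimizer with objective value zero; locating multiple optimal solutions of the NES then amounts to locating multiple global optima of a generally multimodal objective, which is where niching, clustering, and multi-objective EA techniques come into play.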
Scope and Topics
The topics of this special session include (but are not limited to):
Organized by Massimiliano Vasile, Chit Hong Yam, Victor Becerra and Edmondo Minisci
In an expanding world with limited resources and increasing complexity, optimisation
and computational intelligence become a necessity. Optimisation can turn a problem into
a solution and computational intelligence can offer new solutions to effectively make
complexity manageable.
All this is particularly true in space and aerospace where complex systems need to
operate optimally, often in harsh and inhospitable environments, with high levels of
reliability. In Space and Aerospace Sciences, many applications require the solution of
global single and/or multi-objective optimization problems, including mixed variables,
multi-modal and non-differentiable quantities. From global trajectory optimization to
multidisciplinary aircraft and spacecraft design, from planning and scheduling for
autonomous vehicles to the synthesis of robust controllers for airplanes or satellites,
computational intelligence (CI) techniques have become an important – and in many
cases inevitable – tool for tackling these kinds of problems, providing useful and non-intuitive solutions. Not only have Aerospace Sciences paved the way for the ubiquitous
application of computational intelligence, but moreover, they have also led to the
development of new approaches and methods.
In the last two decades, evolutionary computing, fuzzy logic, bio-inspired computing,
artificial neural networks, swarm intelligence and other computational intelligence
techniques have been used to find optimal trajectories, design optimal constellations or
formations, evolve hardware, design robust and optimal aerospace systems (e.g. reusable
launch vehicles, re-entry vehicles, etc.), evolve scheduled plans for unmanned aerial
vehicles, improve aerodynamic design (e.g. airfoil and vehicle shape), optimize
structures, improve the control of aerospace vehicles, regulate air traffic, etc.
Scope and Topics
This special session intends to collect many, diverse efforts made in the application of
computational intelligence techniques, or related methods, to aerospace problems. The
session seeks to bring together researchers from around the globe for a stimulating
discussion on recent advances in evolutionary methods for the solution of space and
aerospace problems.
In particular, evolutionary methods specifically devised, adapted, or tailored to address
problems in space and aerospace applications or evolutionary methods that were
demonstrated to be particularly effective at solving aerospace related problems are
welcome.
Organized by Nelishia Pillay and Rong Qu
Designing metaheuristics to solve problems can be time consuming, requiring many man hours.
This involves making a number of design decisions such as parameter tuning, identifying moves
or operators to use, deciding on the control flow of the algorithm or determining which low-level
construction heuristics to use in the case of combinatorial optimization problems. In some cases
it may be necessary to create new operators or algorithms or hybridize different metaheuristics
to solve a problem. Hyper-heuristics and adaptive metaheuristics have proven to be effective for
making some of these design decisions, thereby facilitating automated design. Hyper-heuristics
have been successfully used for the selection and generation of low-level heuristics in solving
various combinatorial optimization problems including timetabling, vehicle routing, packing
problems amongst others and have also been applied to dynamic environments and
multiobjective optimization. More recent trends in hyper-heuristic research have focused on the
design of metaheuristics. Selection hyper-heuristics have been used for determining parameter
values, choice of operators and control flow in metaheuristics, e.g. evolutionary algorithms and
ant colony optimization, as well as for the hybridization of techniques, e.g. multiobjective evolutionary
algorithms, different metaheuristics. Generation hyper-heuristics have been employed to create
new operators for metaheuristics, e.g. selection and mutation operators. An emerging area in
hyper-heuristics is hyper-hyper-heuristics, i.e. using hyper-heuristics to generate or design
hyper-heuristics.
Scope and Topics
The aim of this special session is for researchers to present recent developments in the field thereby paving the way for future advancement. The main topics include but are not limited to:
Organized by Shi Cheng, Quande Qin, Yuhui Shi and Simone Ludwig
Swarm intelligence algorithms should have two kinds of ability: capability learning and capacity developing. The capacity developing focuses on moving the algorithm's search to the area(s) where higher search potential may be obtained, while the capability learning focuses on the actual search, from the current solution for single-point-based optimization algorithms and from the current population for population-based swarm intelligence algorithms. Swarm intelligence algorithms with both capability learning and capacity developing can be called developmental swarm intelligence algorithms.
The capacity developing is a top-level learning or macro-level learning methodology. The capacity developing describes the learning ability
of an algorithm to adaptively change its parameters, structures, and/or its learning potential according to the search states of the problem
to be solved. In other words, the capacity developing is the search strength possessed by an algorithm. The capability learning is a bottom-level
learning or micro-level learning. The capability learning describes the ability for an algorithm to find better solution(s) from current solution(s)
with the learning capacity it possesses.
The Brain Storm Optimization (BSO) algorithm is a new kind of swarm intelligence algorithm based on the collective behaviour of human beings, that is, the brainstorming process. It is natural to expect that an optimization algorithm based on human collective behaviour could outperform existing swarm intelligence algorithms based on the collective behaviour of simple insects, because human beings are social animals and the most intelligent animals in the world. The designed optimization algorithm will naturally have the capability of both convergence and divergence.
The BSO algorithm is a good example of a developmental swarm intelligence algorithm. A "good enough" optimum can be obtained through solution divergence and convergence in the search space. In the BSO algorithm, the solutions are clustered into several categories, and new solutions are generated by mutating cluster centres or existing solutions. Capacity developing, i.e., adaptation during the search, is another common feature of BSO algorithms. The BSO algorithm can be seen as a combination of swarm intelligence and data mining techniques. Every individual in the brain storm optimization algorithm is not only a solution to the problem to be optimized, but also a data point that reveals the landscape of the problem. Swarm intelligence and data mining techniques can be combined to produce benefits above and beyond what either method could achieve alone.
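The following minimal sketch conveys the cluster-then-perturb idea described above on a toy sphere function; it is not the full BSO algorithm (clustering is crudely approximated by fitness-sorted grouping) and all constants are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):                                   # toy objective to minimise (assumption)
        return np.sum(x ** 2, axis=-1)

    dim, pop_size, n_clusters, iters = 10, 40, 4, 200
    pop = rng.uniform(-5, 5, size=(pop_size, dim))

    for t in range(iters):
        fit = sphere(pop)
        order = np.argsort(fit)                      # crude "clustering": fitness-sorted groups
        clusters = np.array_split(order, n_clusters)
        step = 0.5 / (1 + np.exp((t - iters / 2) / 20))   # slowly decreasing perturbation scale
        new_pop = np.empty_like(pop)
        for i in range(pop_size):
            if rng.random() < 0.8:                   # new idea generated from a single cluster
                c = clusters[rng.integers(n_clusters)]
                base = pop[rng.choice(c)]
            else:                                    # new idea generated by combining two clusters
                c1, c2 = rng.choice(n_clusters, 2, replace=False)
                base = 0.5 * (pop[rng.choice(clusters[c1])] + pop[rng.choice(clusters[c2])])
            new_pop[i] = base + step * rng.normal(size=dim)
        keep = sphere(new_pop) < fit                 # keep the better of old and new individuals
        pop = np.where(keep[:, None], new_pop, pop)

    print("best value found:", sphere(pop).min())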
Scope and Topics
This special session aims at presenting the latest developments of BSO algorithm, as well as exchanging new ideas and discussing the future directions of developmental swarm intelligence. Original contributions that provide novel theories, frameworks, and applications to algorithms are very welcome for this Special Session. Potential topics include, but are not limited to:
Organized by Shi Cheng, Yuhui Shi, Yaochu Jin and Bin Li
Nowadays, big data has been attracting increasing attention from academia, industry and government. Big data is defined as
the dataset whose size is beyond the processing ability of typical databases or computers. Big data analytics is to automatically
extract knowledge from large amounts of data. It can be seen as mining or processing of massive data, and “useful” information can
be retrieved from large datasets. Big data analytics can be characterized by several properties, such as large volume, variety of
different sources, and fast increasing speed (velocity). It is of great interest to investigate the role of evolutionary computing
(EC) techniques, including evolutionary algorithms and swarm intelligence algorithms for the optimization and learning involving
big data, in particular, the ability of EC techniques to solve large scale, dynamic, and sometimes multi-objective big data
analytics problems.
Scope and Topics
This special session aims at presenting the latest developments of EC techniques for big data problems, as well as exchanging new ideas and discussing the future directions of EC for big data. Original contributions that provide novel theories, frameworks, and solutions to challenging problems of big data analytics are very welcome for this Special Session. Potential topics include, but are not limited to:
Organized by Ying Tan and Liangjun Ke
Big data contains huge amounts of data and information and is worth researching in depth. Big data, also known as massive data or mass data, refers to amounts of data too great to be interpreted by a human. The Obama administration invested nearly two hundred million US dollars in the "Big Data Research and Development Initiative" program, aiming to protect national security. In addition, sociologists use big data from social interaction networks to analyze human behavior and communication methods.
However, the methods available to process big data are often ineffective. Currently, the suitable technologies include A/B testing, crowdsourcing, data fusion and integration, genetic algorithms, machine learning, natural language processing, signal processing, simulation, time series analysis, and visualization. Yet real or near-real-time information delivery is one of the defining characteristics of big data analytics, and it is important to find new methods to enhance the effectiveness of big data processing.
The fireworks algorithm (FWA) has achieved great success in solving many complex optimization problems effectively. FWA has a unique search manner in the solution space and a strong capability for solving optimization problems. It has many effective variants and a large number of successful applications. Moreover, FWA is suitable for parallelization and works significantly better than other SI algorithms, such as particle swarm optimization, ant colony optimization, and genetic algorithms.
The main aim of this special session is to bring together both experts and newcomers from academia and industry to discuss the fireworks algorithm and its applications, especially big-data applications. Both improvements and applications of FWA are welcome for this special session.
Scope and Topics
Full papers are invited on recent advances in the development of FWA, i.e., FWA improvements and applications. In addition, we are interested in various studies discussing the processing of big data with FWA. The session seeks to promote the discussion and presentation of novel works related to (but not limited to) the following issues:
Organized by Yang Yu, Ke Tang and Jose A. Lozano
Sophisticated optimization problems lie at the heart of many machine learning tasks. These problems are commonly relaxed into convex optimization problems. Although the relaxation allows efficient optimization using mathematical programming methods, it often shifts the learning problem and loses some important properties (e.g., convex loss functions may be sensitive to data noise). Evolutionary optimization provides a set of direct search tools
that make it possible to solve non-convex optimization problems for machine
learning. This special session intends to bring together researchers to report
their latest progress and exchange experience in solving machine learning
tasks better with evolutionary optimization methods.
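As a toy illustration of direct search on a non-convex, non-differentiable learning objective, the sketch below uses a (1+1)-evolution strategy to tune a linear classifier against its 0-1 training loss; the synthetic data and the simple step-size adaptation are assumptions made purely for the example.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic binary classification data (assumption): 200 points, 5 features.
    X = rng.normal(size=(200, 5))
    true_w = rng.normal(size=5)
    y = np.sign(X @ true_w + 0.3 * rng.normal(size=200))

    def zero_one_loss(w):
        # Non-convex, non-differentiable training objective.
        return np.mean(np.sign(X @ w) != y)

    # (1+1)-evolution strategy with a simple success-based step-size rule.
    w, sigma = np.zeros(5), 1.0
    best = zero_one_loss(w)
    for t in range(2000):
        cand = w + sigma * rng.normal(size=5)
        loss = zero_one_loss(cand)
        if loss <= best:                 # accept equal-or-better candidates
            w, best = cand, loss
            sigma *= 1.1                 # expand the step size after success
        else:
            sigma *= 0.98                # shrink it after failure

    print("final 0-1 training loss:", best)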
Scope and Topics
The interest of this special session is on solving non-convex optimization problems in machine learning with the methodologies related to evolutionary optimization, such as evolutionary algorithms, swarm intelligence algorithms, cross-entropy methods, Bayesian optimization. The topics cover a broad range of machine learning tasks including (but not limited to):
Organized by Katherine M. Malan and Andries P. Engelbrecht
Since the notion of a fitness landscape was introduced by Sewall Wright in 1932, fitness landscapes have been studied by evolutionary biologists to better understand how evolution occurs in nature. In a similar way, researchers in evolutionary computation (EC) have used fitness landscapes to better understand the evolutionary process of search. Studies have ranged from theoretical models of fully enumerated combinatorial landscapes to the prediction of algorithm performance based on approximate fitness landscape characteristics. Fitness landscape analysis is a growing field in the EC community, but research has been scattered across widely different publications and conferences. Research papers on fitness landscapes are often incorporated into theoretical tracks of conferences, even when the research is focussed on the practical application of fitness landscape analysis. Alternatively, papers on fitness landscapes may appear alone in specific algorithm tracks, such as swarm intelligence or genetic programming.
The aim of this special session on fitness landscapes is to provide an opportunity to not only bring
fitness landscape analysis researchers together at CEC 2016, but also to publish the most recent work
in a dedicated track in the proceedings. In addition, the special session should be of interest to researchers and practitioners interested in practically applying fitness landscape analysis techniques to
better understand problems and algorithm behaviour.
Scope and Topics
For this special session on fitness landscapes we invite researchers to submit unpublished work specifically focussing on the practice of fitness landscape analysis. Topics of interest include, but are not limited to:
Organized by Karthik Sindhya, Handing Wang, Markus Olhofer and Yaochu Jin
High fidelity CAE models result in very detailed and precise information on the system at hand
and offer a huge potential for the utilization of numerical optimization methods like evolutionary
computation. However, a big challenge is the resulting high computational cost of the models, which often even hinders the application of evolutionary optimization and design methods. In the literature, various methods have been proposed to tackle this problem, for example approximation methods or surrogates, which are used to reduce the cost of computationally expensive optimization problems. However, this involves many challenges in tailoring the
approximation/surrogate to each problem in question. The aim of this session is to bring together
researchers from evolutionary algorithms who deal with computationally expensive problems.
This is an ideal platform for researchers and practitioners to interact and present ideas for
handling computationally expensive problems.
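A minimal sketch of the surrogate-assisted loop discussed above is given below; the toy "expensive" function, the quadratic least-squares surrogate, and the random search over the surrogate are all simplifying assumptions (real studies would typically use Kriging or RBF models and proper infill criteria).

    import numpy as np

    rng = np.random.default_rng(2)

    def expensive(x):                    # stand-in for a costly high-fidelity simulation (assumption)
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

    dim, budget = 2, 40
    X = rng.uniform(-1, 1, size=(8, dim))                 # initial design of experiments
    y = np.array([expensive(x) for x in X])

    def basis(P):                                         # quadratic basis for the surrogate
        return np.hstack([np.ones((len(P), 1)), P, P ** 2])

    while len(X) < budget:
        w, *_ = np.linalg.lstsq(basis(X), y, rcond=None)  # fit least-squares quadratic surrogate
        cand = rng.uniform(-1, 1, size=(2000, dim))       # cheap search on the surrogate
        x_new = cand[np.argmin(basis(cand) @ w)]
        X = np.vstack([X, x_new])                         # one true (expensive) evaluation per cycle
        y = np.append(y, expensive(x_new))

    print("best expensive value found:", y.min())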
Scope and Topics
The topics include (but are not limited to):
Organized by Hussein A. Abbass and Kay Chen Tan
Big Optimization (BigOpt) is the term we coin to differentiate optimization problems
that rely on big data from classical large scale optimization. BigOpt problems involve
thousands of variables and are normally expected to hide trends.
This special session is organized in conjunction with the BigOpt competition. However, authors who do not contribute to the competition but have papers related to the
optimization of big data problems are also encouraged to submit to the special session.
Scope and Topics
In this special session, several aspects of Evolutionary Algorithm and Nature-Inspired Algorithm design can be considered, including but not limited to the following:
Organized by Xingyi Zhang, Ran Cheng, and Yaochu Jin
Solving many-objective optimization problems (MaOPs) has drawn increasing attention in the research community because MaOPs cannot be solved efficiently using traditional multi-objective evolutionary algorithms (MOEAs) developed for problems with two or three objectives. Among the various ideas adopted for many-objective optimization, one big challenge is to develop highly efficient and effective non-dominated sorting methods and Pareto-based MOEAs for MaOPs. The aim of this special session is to bring together researchers in the evolutionary computation community dedicated to solving MaOPs using non-dominated sorting and Pareto-based approaches.
Scope and Topics
Topics of the special session include, but are not limited to:
Organized by Zhun Fan, Xinye Cai, Chuan-Kang Ting and Qingfu Zhang
Many of the tasks carried out in data mining and machine learning, such as feature subset selection, association rule mining, model building, etc., can be transformed into optimization problems. Thus it is very natural that Evolutionary Computation (EC) has been widely applied, as an optimization technique, to these tasks in the fields of data mining (DM) and machine learning (ML).
On the other hand, EC is a class of population-based iterative algorithms, which generate abundant data about the search space, problem features, and population information during the optimization process. Therefore, data mining and machine learning techniques can also be used to analyze these data to improve the performance of EC. A plethora of successful applications have been reported, including the creation of new optimization paradigms such as Estimation of Distribution Algorithms, the adaptation of parameters or operators in an algorithm, mining the external archive for promising search regions, etc.
However, there remain many open issues and opportunities that are continually emerging as intriguing challenges for bridging the gaps between EC and DM. The aim of this special session is to serve as a forum for scientists in this field to exchange the latest advances in theories, technologies, and practice.
Scope and Topics
We invite researchers to submit their original and unpublished work related to, but not limited to, the following topics:
Organized by Ranjit Singh Chauhan
During the last few decades there has been a growing use of Evolutionary Computation (EC). Evolutionary Computation is a family of machine learning, optimization, and classification algorithms roughly based on the mechanisms of evolution, such as biological genetics and natural selection. This session covers practical, theoretical, and applied aspects of the engineering design of signals, with emphasis on digital signals using evolutionary algorithms. The term "signal" includes audio, video, speech, image, communication, geophysical, biomedical, musical, and other signals. This session will discuss the issues of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals by analog or digital devices. Design, synthesis, integration, evaluation, standardization, and the development of algorithms for building digital filters will be the primary focus.
Scope and Topics
The main aim of this session is to discuss the trends of evolutionary computation for the design of digital filters.
Topics/areas covered may include the following:
Organized by Patricia A. Vargas, Dario Floreano, Joshua Auerbach, Micael Couceiro and Phil Husbands
Evolutionary Robotics (ER) aims to apply evolutionary computation techniques to automatically design the control and/or
hardware of both real and simulated autonomous robots. Its origins date back to the beginning of the nineties and since then
it has been attracting the interest of many research centres all over the world.
ER techniques are mostly inspired by existing biological architectures and Darwin's principle of selective reproduction of the fittest. Evolution has revealed that living creatures are able to accomplish the complex tasks required for their survival, thus embodying cooperative, competitive, and adaptive behaviours.
Having an intrinsically interdisciplinary character, ER has contributed to the development of many fields of research, among which we can highlight neuroscience, cognitive science, evolutionary biology, and robotics. Hence, the objective of this special session is to assemble a set of high-quality original contributions that reflect and advance the state of the art in Evolutionary Robotics, with an emphasis on the cross-fertilization between ER and the aforementioned research areas, ranging from theoretical analysis to real-life applications.
Scope and Topics
Topics of interest include (but are not restricted to):
Organized by Liang Feng, Ferrante Neri and Yew-Soon Ong
Memetic Computing (MC) represents a broad generic framework using the notion of meme(s) as units of information encoded in
computational representations for the purpose of problem-solving. In the literature, MC has been successfully manifested as memetic
algorithm, where meme has been typically perceived as individual learning procedures, adaptive improvement procedures or local
search operators that enhance the capability of population-based search algorithms. More recently, novel manifestations of memes in forms such as knowledge building blocks, decision trees, artificial neural networks, fuzzy systems, graphs, etc., have also been proposed for efficient problem-solving. These meme-inspired algorithms, frameworks, and paradigms have demonstrated considerable success in various real-world applications.
Scope and Topics
The aim of this special session on memetic computing is to provide a forum for researchers in this field to exchange the latest advances in theories, technologies, and practice of memetic computing. The scope of this special session covers, but is not limited to:
Organized by Wei-Chang Yeh and Yew-Soon Ong
Since the early 1990s, Evolutionary Computation has been shown to attain high-quality solutions to difficult optimization problems in fields where exact and analytical methods do not perform well within tractable time, especially on large-scale problems. The essential idea of evolutionary algorithms lies in the use of simple agents that work together, leading to emergent global behaviors that solve complex problems efficiently and effectively. In recent years, there has been increasing interest in creating new Evolutionary Computation methodologies by extending existing Genetic Algorithms (GA), Memetic Algorithms (MA), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC) algorithms, Simplified Swarm Optimization (SSO), and others, that better emulate the power of nature in addressing large-scale real-world problems in the fields of Operations Research, Management Science, and Decision Making. The developed evolutionary algorithms are expected to be flexible to internal and external changes, robust even when some individuals fail, decentralized, and self-organized.
In spite of the significant amount of research on Evolutionary Computation, there remain many open issues and intriguing challenges in addressing large-scale real-world problems in the fields of Operations Research, Management Science, and Decision Making. The aims of this special session are to demonstrate the current state-of-the-art concepts of Evolutionary Computation in these fields, to reflect on the latest advances, and to showcase new directions in the area.
Scope and Topics
Authors are invited to submit their original and unpublished work in the areas including, but not limited to:
Organized by Xin-She Yang and Xingshi He
Nature-inspired algorithms, especially swarm intelligence based algorithms, have become very popular in optimization and
computational intelligence. New algorithms such as the bat algorithm and cuckoo search have demonstrated some distinct
advantages and have thus been applied in many areas such as engineering optimization and image segmentation. Though the
literature on practical applications has expanded significantly in the last few years, theoretical studies lag behind.
This special session will be the second event on these topics, following the first successful event at CEC 2015 in Japan.
Scope and Topics
This special session intends to provide a timely platform to exchange ideas about new bio-inspired optimization algorithms, with emphasis on the following topics (but not limited to):
Organized by Hongmei He
The Internet of Things (IoT) delivers new value by connecting People, Process and Data. It brings great opportunities and is changing our lifestyle. However, great opportunities also bring large risks: IoT-enabled systems could be threatened by various cyber-attacks, crimes and terrorism, as they produce a large cyber space. Hence, cyber security is particularly important in IoT-enabled systems and has recently attracted much attention from researchers and industry. Evolutionary Computation and other Computational Intelligence techniques have been applied in various areas, such as computational biology, medical science, finance, engineering, etc. Cyber security is another area where we can explore the power of Evolutionary Computation and other Computational Intelligence techniques.
Scope and Topics
This special session will cover the following topics of cyber security enabled by Evolutionary Computation and other Computational Intelligence techniques, but is not limited to them.
Organized by Michalis Mavrovouniotis, Changhe Li, Shengxiang Yang and Yinan Guo
Many real-world optimization problems are subject to dynamism and uncertainties that are often impossible to avoid in practice.
For instance, the fitness function is uncertain or noisy as a result of simulation/measurement errors or approximation errors
(in the case where surrogates are used in place of the computationally expensive high fidelity fitness function). In addition,
the design variables or environmental conditions can be perturbed or they change over time.
The tools to solve these dynamic and uncertain optimization problems (DOP) should be flexible, able to tolerate uncertainties,
fast to allow reaction to changes and adaptive. Moreover, the objective of such tools is no longer to simply locate the global
optimum solution, but to continuously track the optimum in dynamic environments, or to find a robust solution that operates properly
in the presence of uncertainties.
The last decade has witnessed increasing research efforts on handling dynamic and uncertain optimization problems using evolutionary
algorithms and other metaheuristics, e.g., ant colony optimization, particle swarm optimization, artificial bee colony etc., and a variety
of methods have been reported across a broad range of application backgrounds.
Scope and Topics
This special session aims at bringing together researchers from both academia and industry to review the latest advances and explore future directions in this field. Topics of interest include but are not limited to:
Organized by Jing Liu and Maoguo Gong
The application of complex networks to evolutionary computation (EC) has received
considerable attention from the EC community in recent years. The most well-known line of work is the use of complex networks, such as small-world and scale-free networks, as potential population structures in evolutionary algorithms (EAs). Structured populations have been proposed as a means of improving the search properties, because several researchers have suggested that EA populations might have structures endowed with spatial features, like many natural populations. Moreover, empirical results suggest that using structured populations is often beneficial owing to better diversity maintenance, the formation of niches, and lower selection pressures in the population, which favour the slow spreading of solutions and relieve premature convergence and stagnation. The study of using complex networks to analyse fitness landscapes and to design predictive problem-difficulty measures is also attracting increasing attention. On the other hand, using EAs to solve problems related to complex networks, such as community detection, is also a popular topic.
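As a hedged sketch of the population-structure idea (assuming the networkx package is available; the graph parameters, operators and fitness function are illustrative only), each individual below interacts only with its neighbours on a small-world network:

import random
import networkx as nx   # assumed dependency; any graph library would do

def networked_ea_step(fitness, pop, graph):
    # one generation of a cellular-style EA: each individual mates with and
    # is replaced only using material from its network neighbourhood
    new_pop = list(pop)
    for i in graph.nodes:
        nbrs = list(graph.neighbors(i))
        if not nbrs:
            continue
        mate = max(nbrs, key=lambda j: fitness(pop[j]))       # local selection
        child = [(a + b) / 2 + random.gauss(0, 0.1)           # blend crossover + mutation
                 for a, b in zip(pop[i], pop[mate])]
        if fitness(child) > fitness(pop[i]):                  # local replacement
            new_pop[i] = child
    return new_pop

if __name__ == "__main__":
    n, dim = 50, 3
    g = nx.watts_strogatz_graph(n, k=4, p=0.1)                # small-world population structure
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    fit = lambda x: -sum(v * v for v in x)                    # maximise (negated sphere)
    for _ in range(100):
        pop = networked_ea_step(fit, pop, g)
    print(max(pop, key=fit))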
Scope and Topics
This special session seeks to bring together researchers from around the globe for a creative discussion on recent advances and challenges in combining complex networks and EAs. The special session will focus on, but is not limited to, the following topics:
Organized by Anirban Mukhopadhyay, Ujjwal Maulik and Sanghamitra Bandyopadhyay
Computational biology is emerging as a field for the application of evolutionary computation tools and techniques such as genetic algorithms, genetic programming, differential evolution, particle swarm optimization, ant colony optimization and other related population-based metaheuristic techniques. Many computational biology problems, such as sequence alignment, gene mapping, fragment assembly, phylogenetic analysis, microarray analysis, biological network analysis and rational drug design, can be posed as optimization problems. Evolutionary computing techniques have therefore been applied to these problems as optimization tools over the last few decades. However, the growing size and complexity of biological data are creating new issues and challenges, and it is becoming difficult to apply off-the-shelf techniques directly. These challenges include coping with large data sizes, handling many objective functions, dealing with large numbers of features, incorporating biological knowledge into the models, etc. The main aim of this special session is to bring together the scientists and researchers of this field to exchange the latest advances in theories and experiments in this area.
Scope and Topics
Researchers are invited to submit original and unpublished works that deal with the application of evolutionary computation techniques to the following and other related areas.
Organized by Richard Allmendinger, Daniel Ashlock and Sanaz Mostaghim
Bioinformatics and Bioengineering (BB) are interdisciplinary scientific fields involving many branches of computer science, engineering,
mathematics, and statistics. Bioinformatics is concerned with the development and application of computational methods for the modeling, retrieval and analysis of biological data, whilst Bioengineering is the application of engineering techniques to biology so as to create
usable and economically viable products.
Bioinformatics and Bioengineering are relatively new fields in which many challenges and issues can be formulated as (single and multiobjective)
optimization problems. These problems span from traditional problems, such as the optimization of biochemical processes, construction of gene
regulatory networks, protein structure alignment and prediction, to more modern problems, such as directed evolution, drug design, experimental
design, and optimization of manufacturing processes, material and equipment.
The main aim of this special session is to bring together both experts and new-comers working on Optimization and Decision-Making in Bioinformatics
and Bioengineering (ODMBB) to discuss new and exciting issues in this area.
Scope and Topics
We encourage submission of papers describing new optimization strategies/challenges/applications/decision-making techniques in the area of BB. In addition, we are interested in application papers discussing the power and applicability of these novel methods to real-world problems in BB. You are invited to submit papers that are unpublished original work for this special session at IEEE CEC’16, which is part of IEEE WCCI ’16. The topics include, but are not limited to, the following:
Organized by Will Browne, Keiki Takadama, Yusuke Nojima, Masaya Nakata and Tim Kovacs
Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks
including optimization, classification, regression, and clustering. Since machine learning contributes to a local search while evolutionary
computation contributes to a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce a system performance that cannot be achieved by either of these approaches alone. Historically, this research area was called GBML (genetics-based machine learning) and was concerned with learning classifier systems (LCS) and their numerous implementations such as fuzzy learning classifier systems (Fuzzy LCS). More recently, EML has emerged as a more general field than GBML; EML covers a wider
range of machine learning adapted methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy
systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm
than GBML. From this viewpoint, the aim of this special session is to explore potential EML technologies and clarify new directions for EML
to show its prospects.
Scope and Topics
This special session follows the first successful special session (the largest session among the special sessions) held in CEC 2015. The continuous exploration of this field by organizing the special session in CEC is indispensable to establish the discipline of EML. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:
Organized by Kai Qin, Kenneth V. Price, Swagatam Das and Jouni Lampinen
Differential evolution (DE) emerged as a simple and powerful stochastic real-parameter optimizer
more than a decade ago and has now developed into one of the most promising research areas in
the field of evolutionary computation. The success of DE has been ubiquitously evidenced in
various problem domains, e.g., continuous, combinatorial, mixed continuous-discrete,
single-objective, multi-objective, constrained, large-scale, multimodal, dynamic and uncertain
optimization problems. Furthermore, the remarkable efficacy of DE in real-world applications
significantly boosts its popularity.
Over the past decades, numerous studies on DE have been carried out to improve the performance
of DE, to give a theoretical explanation of the behavior of DE, to apply DE and its derivatives to
solve various scientific and engineering problems, as demonstrated by a huge number of research
publications on DE in the form of monographs, edited volumes and archival articles.
Consequently, DE related algorithms have frequently demonstrated superior performance in
challenging tasks. It is worth noting that DE has always been one of the top performers in previous
competitions held at the IEEE Congress on Evolutionary Computation. Nonetheless, the lack of
systematic benchmarking of the DE related algorithms in different problem domains, the existence
of many open problems in DE, and the emergence of new application areas call for an in-depth
investigation of DE.
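For readers new to DE, a minimal sketch of the classic DE/rand/1/bin scheme is given below; the population size, F, CR and the sphere test function are illustrative placeholders rather than settings recommended by this session:

import random

def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9, gens=200):
    # minimal DE/rand/1/bin sketch for bound-constrained minimisation
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            jrand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]  # clamp to bounds
            tf = f(trial)
            if tf <= fit[i]:              # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    return pop[fit.index(min(fit))]

if __name__ == "__main__":
    print(differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 10))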
Scope and Topics
This special session aims at bringing together researchers and practitioners to review and re-analyze past achievements, to report and discuss latest advances, and to explore and propose future directions in this rapidly emerging research area. Authors are invited to submit their original and unpublished work in the areas including, but not limited to:
Organized by Martin Schlueter, Hernan Aguirre and Akira Oyama
Many real-world applications involve both continuous and discrete parameters. Optimization models that consider these two kinds of parameters simultaneously are referred to as mixed-integer problems and are exceptionally difficult to solve. While comprehensive analyses of deterministic algorithms for mixed-integer problems exist, the use of evolutionary algorithms for this kind of problem is still a young and emerging field. Considering the robustness of evolutionary algorithms and their capability for parallelization and multi/many-objective optimization, they can offer significant new potential for the class of mixed-integer problems. In the context of evolutionary computing, this will be the first session especially dedicated to mixed-integer problems.
Scope and Topics
This special session intends to bring together researchers who apply evolutionary algorithms and other search heuristics to mixed-integer optimization problems. The aim of this session is to provide a forum where experience and new techniques in solving mixed-integer problems with such methods are shared and discussed. The main topics include but are not limited to:
Organized by Sanaz Mostaghim and Kalyanmoy Deb
This special session invites papers discussing recent advances in the development and
application of biologically-inspired multi-objective optimization algorithms.
Many problems from science and industry have several (and normally conflicting) objectives
that have to be optimized at the same time. Such problems are called multi-objective
optimization problems and have been the subject of research over the past two decades. One of the
reasons why evolutionary algorithms are so suitable for multi-objective optimization is
because they can generate a whole set of solutions (the Pareto-optimal solutions) in a single
run rather than requiring an iterative one-solution-at-a-time process as followed in traditional
mathematical programming techniques.
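As a small, generic illustration of the trade-off-set idea (independent of any particular algorithm discussed here), the following sketch checks Pareto dominance and extracts the non-dominated set from a list of objective vectors, assuming minimisation:

def dominates(u, v):
    # True if objective vector u Pareto-dominates v (minimisation)
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    # non-dominated subset of a list of objective vectors
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

if __name__ == "__main__":
    pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
    print(pareto_front(pts))   # (3.0, 3.0) is dominated by (2.0, 2.0) and is removed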
The main aim of this special session organized within the 2016 IEEE Congress on
Evolutionary Computation (CEC'2016) is to bring together both experts and new-comers
working on Evolutionary Multi-objective Optimization (EMO) to discuss new and exciting
issues in this area.
Scope and Topics
We encourage submission of papers describing new concepts and strategies, and systems and tools providing practical implementations, including hardware and software aspects. In addition, we are interested in application papers discussing the power and applicability of these novel methods to real-world problems in different areas in science and industry. You are invited to submit papers that are unpublished original work for this special session at CEC 2016. The topics include, but are not limited to, the following:
Organized by Masaharu Munetomo, Juan Julián Merelo Guervós and Yuji Sato
Recent advances in cloud computing have led to a global infrastructure, “the Inter-cloud” (clouds of cloud systems), that can provide virtually infinite IT resources, such as virtual machines and storage units, simply by calling web-service APIs through the Internet.
The environment must offer enough resources and complexity for individuals to “evolve”. Cloud systems may offer tens of thousands of virtual machines, terabytes of memory and exabytes of storage capacity. The current trend toward many-core architectures increases the number of cores even more dramatically: we may have more than a million cores offering extremely massive parallelization.
In this special session, we will discuss parallel and distributed evolutionary computation in the cloud era, such as the implementation of massively parallel evolutionary algorithms employing cloud computing systems and services, and the parallel implementation of evolutionary algorithms on many-core architectures including GPUs. We also welcome any type of parallel and distributed evolutionary computation on any “informal” type of computing environment, including the following themes.
Scope and Topics
The topics include, but are not limited to, the following:
Organized by Fatih Tasgetiren, Sevil Sariyildiz, Ozer Ciftcioglu and Suganthan
Architecture has a profound impact on the environment that we spend much of our daily lives in. As such, it is important that the
outcomes of architectural design are well performing and suitable for their intended purpose. Architectural design is a process of
high complexity, which aims to satisfy design goals comprising hard, engineering aspects, together with soft, perceptual and cognitive
ones. Due to the excessive complexity of the design task, human cognition alone is often insufficient to ensure suitable outcomes.
Computational Intelligence and Soft Computing methods can aid in confidently arriving at high-performing architectural design solutions,
as well as provide valuable inspiration during the design process. As such, these methods are high on the contemporary scientific agenda.
The majority of architectural design problems involve real-valued decision variables and multiple objectives. For these reasons, they
prove to be an ideal field for Multi-Objective Real-Parameter Constrained Optimization applications of soft computing and computational
intelligence methods such as Evolutionary Algorithms, machine learning, fuzzy logic, simulation etc. These methods are capable of navigating
complex design spaces, such as those encountered in architectural design problems, and identifying best tradeoff solutions.
Scope and Topics
This session aims to put forward original contributions, latest research and development, and contemporary issues in the field of soft computing and computational intelligence for architectural and building design. It intends to collect a series of innovative, high quality papers on ideas, concepts, and technologies that make use of evolutionary algorithms in these research areas. Proposed submissions should be original, unpublished, and present novel fundamental research contributions from a theoretical or an application point of view. Session topics include (but are not limited to) the following:
Organized by Pietro S. Oliveto and Andrew M. Sutton
Bio-inspired search heuristics often turn out to be highly successful for optimization in practice. The theory of these randomized search heuristics explains the success or the failure of these methods in practical applications. Theoretical analyses lead to the understanding of which problems are optimized (or approximated) efficiently by a given algorithm and which are not. The benefits of theoretical understanding for practitioners are threefold.
Scope and Topics
Potential authors are invited to submit papers describing original contributions to foundations of evolutionary computation. Although we are most interested in theoretical foundations, computational studies of a foundational nature are also welcome. The scope of this special session includes (but is not limited to) the following topics:
Organized by Yan Pei and Hideyuki Takagi
Evolutionary computation (EC) with human factors, such as Interactive EC (IEC), EC for
human-related applications, EC-based visual/auditory/haptic design, EC for analyzing
human characteristics and others, is an approach whereby such properties as human
knowledge, experience, and preference are embedded into an optimization/design process
and application tasks. By embedding the human being itself into an optimization system
or using EC for humans, EC techniques become applicable to tasks for which it is difficult
to construct an evaluation system or measure the evaluations. EC with human factors has
also been applied to artistic areas such as creating music or graphics, engineering areas
such as sound and image processing, control and robotics, virtual reality, data mining,
media database retrieval, and others, including geoscience, education, games, and many
other tasks in various areas. From a framework point of view, EC with human factors can
be realized with any EC algorithm by replacing the fitness function with a human user.
Several EC techniques are used in EC with human factors, such as interactive genetic
algorithms, interactive genetic programming, interactive evolution strategy, human based
genetic algorithm, interactive particle swarm optimization, interactive differential
evolution, and others. There are many directions in EC with human factors research, such as expanding the applications of EC with human factors; expanding EC with human factors frameworks; applying EC with human factors in a reverse-engineering approach to analyze humans; and accelerating EC with human factors searches and improving IEC interfaces. The proposed session provides researchers and educators with a platform to report and discuss state-of-the-art research and study on subjects of evolutionary computation with human factors.
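A toy sketch of this replace-the-fitness-function-with-a-user idea is shown below; the console prompt, the encoding and the parameters are illustrative placeholders only:

import random

def human_fitness(candidate):
    # the "fitness function" is the user: display the candidate and ask for a score
    print("candidate:", ["%.2f" % v for v in candidate])
    return float(input("rate this candidate (0 = worst, 10 = best): "))

def interactive_ec(dim=4, pop_size=6, gens=5):
    pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        scores = [human_fitness(x) for x in pop]          # human replaces the fitness function
        ranked = [x for _, x in sorted(zip(scores, pop), key=lambda t: t[0], reverse=True)]
        parents = ranked[: pop_size // 2]
        pop = parents + [[v + random.gauss(0, 0.1) for v in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return pop

if __name__ == "__main__":
    interactive_ec()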
Scope and Topics
Submissions on topics within (but not limited to) the following scopes are especially welcome at this special session:
Organized by Mengjie Zhang, Muhammad Iqbal, Yi Mei and Bing Xue
Data mining, machine learning, and optimisation algorithms have shown great promise in many real-world tasks, such as classification, clustering and regression. These algorithms can often generalise well on data in the same domain, i.e. drawn from the same feature
space and with the same distribution. However, in many real-world applications, the available data are often from different domains.
For example, we may need to perform classification in one target domain, but only have sufficient training data in another (source)
domain, which may be in a different feature space or follow a different data distribution. Transfer Learning aims to transfer knowledge
acquired in one problem domain, i.e. the source domain, onto another domain, i.e. the target domain. Transfer learning has recently
emerged as a new learning framework and hot topic in data mining and machine learning.
Scope and Topics
Evolutionary computation techniques have been successfully applied to many real-world problems and have started to be used to solve transfer learning tasks. Meanwhile, transfer learning has attracted increasing attention from many disciplines and has been used in evolutionary computation to address complex and challenging issues. The theme of this special session is transfer learning in evolutionary computation, covering ALL different evolutionary computation paradigms. The aim is to investigate new theories and methods on how transfer learning can be achieved with different evolutionary computation paradigms, how transfer learning can be adopted in evolutionary computation, and the applications of evolutionary computation and transfer learning to real-world problems. Authors are invited to submit their original and unpublished work to this special session. Topics of interest include but are not limited to:
Organized by Saúl Zapotecas-Martínez, Bilel Derbel, Qingfu Zhang and Carlos Artemio Coello Coello
The purpose of this special session is to promote the design, study, and validation of generic
approaches for solving multiobjective optimization problems based on the concept of
decomposition. Decomposition-based Evolutionary Multiobjective Optimization (DEMO) encompasses any technique, concept or framework that takes inspiration from the “divide and conquer” paradigm, essentially breaking a multiobjective optimization problem into several subproblems whose solutions are computed and aggregated in a cooperative manner to solve the original global problem. This simple idea, which is rather standard in computer science and information systems, opens up exciting new research perspectives and challenges both at the
fundamental level of our understanding of multiobjective problems, and in terms of designing and
implementing new efficient algorithms for solving them. Generally speaking, the special session
will focus on stochastic evolutionary approaches for which decomposition is performed with respect
to the objective space, typically by means of scalarizing functions like in the MOEA/D framework.
We, however, also encourage contributions reporting advances with respect to other decomposition techniques operating in the decision space, as done in the so-called cone separation methods; or
other hybrid approaches taking inspiration from operations research and mathematical
programming. In fact, many different DMOEA variants have been proposed, studied and applied to various application domains in recent years. However, DMOEAs are still in their infancy, since only a few basic design principles have been established compared to the huge body of literature dedicated to other well-established approaches (e.g., Pareto ranking, indicator-based techniques, etc.), and relatively few research forums have been dedicated to the study of DEMO approaches and their unification. The main goal of the proposed session is to encourage research studies that systematically investigate the critical issues in DMOEAs with the aim of understanding their key ingredients and their main dynamics, as well as to develop solid and generic principles for
designing them. The long term goal is to contribute to the emergence of a general and unified
methodology for the design, the tuning and the performance assessment of DMOEAs.
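As a concrete illustration of objective-space decomposition via scalarizing functions (the weight vectors, ideal point and objective values below are invented for the example), the weighted Tchebycheff function used in MOEA/D-style frameworks can be sketched as:

def tchebycheff(fvals, weights, z_star):
    # weighted Tchebycheff scalarisation g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|,
    # which turns one multiobjective problem into one single-objective subproblem per weight vector
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, z_star))

if __name__ == "__main__":
    z_star = (0.0, 0.0)                                   # ideal point (best value seen per objective)
    weights = [(0.25, 0.75), (0.5, 0.5), (0.75, 0.25)]    # each weight vector defines one subproblem
    f_x = (0.4, 0.9)                                      # objective values of one candidate solution
    print([tchebycheff(f_x, w, z_star) for w in weights])

In MOEA/D-style frameworks, neighbouring subproblems (those with similar weight vectors) then exchange solutions cooperatively, which is the "aggregation in a cooperative manner" referred to above.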
Scope and Topics
The special session will be a good opportunity for researchers in the evolutionary and multiobjective optimization field to exchange their recent ideas and advances on the design and analysis of DEMO
approaches. In this respect, we are welcoming high quality papers in theoretical, developmental,
implementational, and applied aspects of DEMO approaches. More particularly, the special session
will encourage original research contributions that address new and existing DMOEAs, their
contributions and relationships to other methodologies dedicated to multiobjective optimization in
terms of: algorithmic components, decomposition strategies, collaboration among different search
procedures, design of new specialized search procedures, parallel and distributed implementations,
incorporation of user interaction, combination and hybridization with other traditional (heuristic or
exact) techniques, strategies for dealing with many objectives, noisy problems, and expensive
problems, problem solving and applications, etc. The main focus will be on eliciting the main design principles that lead to effective and efficient cooperative search procedures among the so-defined single- or multi-objective subproblems.
The topics of interest include (but are not limited to) the following issues:
Organized by Michael G. Epitropakis, Xiaodong Li and Andries Engelbrecht
Population- or single-solution search-based optimization algorithms (i.e., meta- and hyper-heuristics) in their original forms are usually designed for locating a single global solution. Representative examples include, among others, evolutionary and swarm intelligence algorithms. These search algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are “multimodal” by nature, i.e., multiple satisfactory
solutions exist. It may be desirable to locate many such satisfactory solutions, or even all of
them, so that a decision maker can choose one that is most proper in his/her problem domain.
Numerous techniques have been developed in the past for locating multiple optima (global
and/or local). These techniques are commonly referred to as “niching” methods. A niching
method can be incorporated into a standard search-based optimization algorithm, in a sequential or parallel way, with the aim of locating multiple globally optimal or suboptimal solutions. Sequential approaches locate optimal solutions progressively over time, while
parallel approaches promote and maintain formation of multiple stable subpopulations within a
single population. Many niching methods have been developed in the past, including crowding,
fitness sharing, derating, restricted tournament selection, clearing, speciation, etc. In more
recent times, niching methods have also been developed for metaheuristic algorithms such as
Particle Swarm Optimization, Differential Evolution and Evolution Strategies.
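To make the niching idea concrete, a minimal sketch of classic fitness sharing is given below; the sharing radius, the Euclidean distance and the alpha exponent are illustrative choices, not recommendations:

import math

def shared_fitness(pop, raw_fitness, sigma_share=0.1, alpha=1.0):
    # classic fitness sharing: each individual's fitness is divided by its niche count,
    # so crowded peaks are penalised and multiple optima can survive in one population
    dist = lambda x, y: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    shared = []
    for x, fx in zip(pop, raw_fitness):
        niche_count = sum(
            1.0 - (dist(x, y) / sigma_share) ** alpha if dist(x, y) < sigma_share else 0.0
            for y in pop)
        shared.append(fx / niche_count)   # niche_count >= 1 because d(x, x) = 0
    return shared

if __name__ == "__main__":
    pop = [(0.1,), (0.12,), (0.9,)]
    # the isolated individual at 0.9 keeps a higher shared fitness than the crowded pair
    print(shared_fitness(pop, [1.0, 1.0, 1.0]))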
Scope and Topics
Most existing niching methods, however, have difficulties that need to be overcome before they can be applied successfully to real-world multimodal problems. Some identified issues include: difficulty in pre-specifying some niching parameters; difficulty in maintaining found solutions during a run; extra computational overhead; and poor scalability when dimensionality and modality are high. This special session aims to highlight the latest developments in niching methods, bringing together researchers from academia and industry, and exploring future research directions on this topic. We invite authors to submit original and unpublished work on niching methods. Topics of interest include but are not limited to:
Organized by Suganthan, Mostafa Z. Ali, Qin Chen, Liang and B. Y. Qu
This special session will be used to receive conference papers submitted to the above competition. The details of the competition are presented below; these details apply equally to the special session.
Competition Goals
The goals are to evaluate the current state of the art in single-objective optimization with bound constraints and to propose novel benchmark problems with diverse characteristics. The algorithms will be evaluated under budgets ranging from a very small to a large number of function evaluations, and in both single-solution and multiple-solution settings. Under the above scenarios, novel problems will be designed for the first time to emulate real-world problem solving. In particular, the following cases will also be considered:
Contributions to the Evolutionary Computation Community
Single-objective numerical optimization is one of the most important classes of problems. Virtually all new evolutionary and swarm algorithms are tested on single-objective benchmark problems. In addition, these single-objective benchmark problems can be transformed into dynamic, niching, composition, computationally expensive and many other classes of problems.
How to submit an entry and how to evaluate them
Potential authors are asked to make use of the codes for the benchmark problems, to be distributed via the competition's web pages, to test their algorithms either with or without surrogate methods. The authors have to execute their novel or existing algorithms on the given benchmark problems and present the results in various formats as outlined in the technical report. The evaluation criteria will also be specified in the technical report. The authors are asked to prepare a conference paper detailing the algorithms used and the results obtained on the given benchmark problems and to submit their papers to the associated special session within CEC 2016. The authors presenting the best results should also be willing to release their codes for verification before the eventual winners of the competition are declared.
Special Session Associated With This Competition
This competition requires all entries to have an associated conference paper submitted. We also expect at least one author of each entry to register, attend the conference and present their papers.
Organized by Stefano Nichele and Gunnar Tufte
The special session on “evolution of physical systems and matter” encompasses understanding,
modeling and applying biologically inspired mechanisms to physical systems, where evolution
occurs entirely in real-world physical substrates rather than in simulation. Using real physical systems or materials for computation may allow evolution to exploit underlying physical properties that may not be available in simulation (e.g., due to the “reality gap”), thus allowing the
discovery of novel evolutionary solutions.
The aim of this special session is to bring together researchers in order to share ideas and
innovations on biologically inspired mechanisms applied to physical systems. Application areas
include bioinspired algorithms applied to physical systems, the creation of novel physical
devices, novel or optimized designs for physical systems, adaptive physical systems, novel
evolutionary techniques for embedded evolution and embedded computation, and novel
material substrates that may support computation.
This special session is inspired by overlapping principles that emerged in several domains, such
as Evolution-in-Materio (Pask, 1959; Miller and Downing, 2002), where the underlying physics of materials is used as a computation substrate.
Notable examples of evolution of physical systems and matter range from novel nanoscale
materials for computation (Broersma et al., 2012), to FPGAs (Thompson, 1996), embodied
evolution (Watson et al., 2002), 3D printers (Rieffel and Sayles, 2010), EHW (Yao and Higuchi,
1996; Greenwood and Tyrrell, 2006), electronic devices (Hornby et al., 2006) and robotics
(Zykov et al., 2004).
Scope and Topics
The special session on Evolutionary Physical Systems and Matter intends to collect both theoretical contributions/principles and practical applications. Real application scenarios, from nanotechnology to buildings, are welcome. The topics of this special session include (but are not limited to):
Organized by Daniel Molina, Swagatam Das and Antonio La Torre
In the past two decades, many nature-inspired optimization algorithms have been developed and applied successfully for solving a wide range of optimization problems, including Simulated Annealing (SA), Evolutionary Algorithms (EAs), Differential Evolution (DE), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Estimation of Distribution Algorithms (EDA), etc. Although these techniques have shown excellent search capabilities when applied to small or medium sized problems, they still encounter serious challenges when applied to large scale problems, i.e., problems with several hundreds to thousands of variables. The reasons appear to be two-fold. Firstly, the complexity of a problem usually increases with the increasing number of decision variables, constraints, or objectives (for multi-objective optimization problems), which may prevent a previously successful search strategy from locating the optimal solutions. Secondly, as the size of the solution space of the problem grows exponentially with the increasing number of decision variables, there is an urgent need to develop more effective and efficient search strategies to better explore this vast solution space with a limited computational budget.
In recent years, research on scaling up EAs to large-scale problems has attracted much attention, including both theoretical and practical studies. Existing work on this topic is, however, still rather limited, given the significance of the scalability issue.
Scope and Topics
This special session is devoted to highlighting recent advances in EAs for handling large-scale global optimization (LSGO) problems, involving single or multiple objectives, unconstrained or constrained search spaces, and binary/discrete, real, or mixed decision variables. More specifically, we encourage interested researchers to submit their original and unpublished work on:
Organized by Andy M Tyrrell and Martin A Trefzer
Evolvable systems encompass understanding, modelling and applying biologically inspired mechanisms to physical systems. Application areas for bio-inspired algorithms include the creation of novel physical devices/systems, novel or optimised designs for physical systems, and the achievement of adaptive physical systems. Having showcased examples from analogue and digital electronics, antennas, MEMS chips, optical systems as well as quantum circuits in the past, we are looking for papers that apply the techniques and applications of evolvable systems to these hardware systems, and this year we are in particular looking for papers in the areas of evolutionary robotics and evolutionary many-core systems.
Scope and Topics
Topics include but are not limited to:
Organized by David Camacho
This special session will be devoted to the study of new bio-inspired and evolutionary algorithms applied to open, dynamic and complex problems where unsupervised learning algorithms can provide new insights. The special session will be particularly focused on the practical application of bio-inspired methods (evolutionary strategies, swarm intelligence methods, or nature-based approaches, amongst others) in combination with unsupervised algorithms such as clustering (K-means, hierarchical clustering, etc.), grouping, Hidden Markov Models, or latent variable models (EM algorithm, method of moments, …), to mention only a few. Applications to real and complex scenarios, such as community finding in very large social networks, data analysis in wireless sensor networks, real applications in industry and engineering, automatic and semi-automatic malware detection, energy, and many others, are welcome. Therefore, the main goals of this special session are two-fold: on the one hand, to look for new algorithms and techniques based on bio-inspired and evolutionary computation that have been successfully combined with other unsupervised approaches; on the other hand, to look for new application domains and real problems where the application of bio-inspired and evolutionary computation combined with unsupervised methods has demonstrated outstanding performance compared with other traditional approaches.
Papers will be accepted and published on the basis of their quality and relevance to the session themes, clarity of presentation, originality, and accuracy of results and proposed solutions.
Scope and Topics
Topics include, but are not limited to:
Organized by Martin Lukac, William N. N. Hung and Claudio Moraga
As quantum information and computation research continues to develop, we will see increasing interest in adapting the philosophy and ideas of quantum computing and information theory to other, more traditional areas of computational research. Although the hardware technology to realize quantum computing has yet to fully materialize, research on the theoretical aspects of quantum computing and its underlying ideas has enjoyed some success in artificial and computational intelligence.
The main aim of this special session is to bring together experts in algorithms, quantum
information, quantum algorithms, physicists and hardware designers to discover new applications
and features resulting from not only a cross-disciplinary information exchange but
also from discussion between engineers and scientists working in the larger area of quantum
information and computation.
Scope and Topics
This special session focuses on combining various aspects of quantum computing and information theory with existing fields in computational intelligence. In particular, the use of classical algorithms to solve quantum problems, the use of quantum algorithms to solve classical problems, and the use of quantum algorithms to solve quantum problems are the three main themes at the center of this special session. Some typical research areas that will be discussed in this special session include (but are not limited to) the following:
Organized by Robert G. Reynolds, Mostafa Ali, Ziad Kobti and Ponnuthurai Nagaratnam Suganthan
Cultural Algorithms are computational models of Cultural Evolution. As such, they provide a framework within which the experiences of problem solvers embedded in a social fabric influence the collective knowledge of that group, its Culture. Culture is viewed as a network of passive and active knowledge sources. These knowledge sources are able to integrate this knowledge, either individually or collectively, into their structure using data mining and machine learning tools. This updated Cultural Knowledge is then used to direct the modifications to individuals and their plans in the population space. Cultural Algorithms are an ideal framework for problems that require large amounts of domain knowledge to direct the collective decisions of individuals in the population. As such, Cultural Algorithms have been successfully applied to problems in complex hierarchical systems characterized by large and extensive data sets (big data), many domain constraints, multiple objectives, and multiple agents within a large and spatially distributed social network.
Cultural Algorithms can also provide a flexible framework for hybridization with other socially motivated technologies such as particle swarm optimization, differential evolution, ant colony optimization, and co-evolutionary approaches, among others. These hybrid systems have required extensions to classical Cultural Algorithms, such as multi-population and multi-belief spaces, and novel approaches to using belief-space knowledge to drive evolutionary search. This special session is designed to provide an overview of the diverse hybrid approaches that have been proposed beyond the classical Cultural Algorithm. Cultural Algorithm designers are invited to submit their latest extensions and share a glimpse of the future of Cultural Algorithms.
Scope and Topics
This special session will focus on all aspects of Cultural Algorithms theory and application. Topics of interest may cover, but are not limited to the following:
Organized by Markus Wagner, William Langdon and Brad Alexander
In the past ten years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to software engineering in which search-based optimisation algorithms are used to address problems. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. SBSE has been applied to a number of software engineering activities, right across the life-cycle from requirements engineering, project planning and cost estimation through testing, to automated maintenance, service-oriented software engineering, compiler optimisation and quality assessment.
With this special session, we are providing an opportunity to showcase recent breakthroughs in this field.
Scope and Topics
We invite submissions on any aspect of SBSE, including, but not limited to, theoretical results and interesting new applications. The suggested topics cover the entire range of functional and non-functional properties:
Organized by Mardé Helbig, Kalyanmoy Deb and Andries Engelbrecht
Most real-world optimization problems have more than one objective, with at least two
objectives that are in conflict with one another. The conflicting objectives of the
optimization problem lead to an optimization problem where a single solution does not
exist, as is the case with single-objective optimization problems (SOOPs). Instead of a single solution, a set of optimal trade-off solutions exists, referred to as the Pareto-optimal front (POF) or Pareto front. These kinds of optimization problems are referred to as multi-objective optimization problems (MOOPs).
In many real-world situations the environment does not remain static, but is dynamic and changes over time. However, in recent years most research has focused on
either static MOOPs or dynamic SOOPs. When solving dynamic multi-objective
optimization problems (DMOOPs) an algorithm has to track the changing POF over
time, while finding solutions as close as possible to the true POF and maintaining a
diverse set of solutions. Some of the major challenges in the field of dynamic multi-
objective optimization (DMOO) are a lack of a standard set of benchmark functions, a
lack of standard performance measures, issues with performance measures currently
being used for DMOO and a lack of a comprehensive analysis of existing algorithms
applied to DMOO.
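One widely cited benchmark of this kind is FDA1; its definition, reconstructed here from the standard DMOO literature (the exact form should be checked against the original reference), makes the time dependence of the POF explicit:

\[
f_1(\mathbf{x}_I) = x_1, \qquad f_2(\mathbf{x}) = g(\mathbf{x}_{II}, t)\, h(f_1, g),
\]
\[
g(\mathbf{x}_{II}, t) = 1 + \sum_{x_i \in \mathbf{x}_{II}} \big(x_i - G(t)\big)^2, \qquad h(f_1, g) = 1 - \sqrt{f_1 / g},
\]
\[
G(t) = \sin(0.5\,\pi\, t), \qquad t = \frac{1}{n_T} \left\lfloor \frac{\tau}{\tau_T} \right\rfloor,
\]

where \(\tau\) is the generation counter, \(\tau_T\) the number of generations between changes and \(n_T\) the change severity; as \(t\) varies, the Pareto-optimal set moves and an algorithm must track it while maintaining diversity.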
Therefore, this special session aims to highlight the latest developments in dynamic
multi-objective optimization (DMOO) in order to bring together researchers from both
academia and industry to address the above mentioned challenges and to explore
future research directions for the field of DMOO. We invite authors to submit original
and unpublished work on DMOO.
Scope and Topics
Topics of interest include, but are not limited to:
Organized by Ana Maria Madureira
Decision Support Systems are interactive software-based systems intended to support business and organizational decision-making activities, helping decision makers compile information, model business processes, solve problems and make decisions. This special session addresses the design of decision support systems in which dynamic adaptation and optimization become increasingly important, incorporating experts' knowledge from an Ambient Intelligence perspective. It gives relevance to the idea of human-centered design and the intelligence needed to allow systems to foresee users' needs and preferences.
This special session intends to present and discuss the integration of recent developments in Evolutionary Computation, Self-Organization, Decision Support Systems, Information Systems and Human-Computer Interaction, and Ambient Intelligence in general.
Scope and Topics
The topics of interest for this special session include, but are not limited to:
Organized by Ivan Zelinka, Guanrong Chen, Ponnuthurai Nagaratnam Suganthan, Andy Adamatzky and René Lozi
Evolutionary computation, as well as the dynamics and structure of complex systems, has been a vibrant area of research over the last decades. To date, a large set of nonlinear complex systems exhibiting chaotic and/or emergent behaviours has been observed, analysed and used. They include evolutionary algorithms, as Wright and Agapie showed in Cyclic and Chaotic Behaviour in Genetic Algorithms at the GECCO 2001 conference. Such algorithms, systems and their mutual fusion form an essential part of science and engineering. The most notable examples include chaos control and synchronization, chaotic dynamics for pseudo-random number generators in evolutionary algorithms, the modelling of evolutionary dynamics as complex networks, the use of the chaos game with evolutionary algorithms, and/or the use of evolution in complex systems design and analysis (evolution in complex networks). Recently, the study of such phenomena has focused not only on the traditional trends but also on the understanding and analysis of principles, with the new intention of controlling them and utilizing them in real-world applications.
This special session is concerned with evolutionary dynamics as a complex process that can be modelled and analysed by means of complex network tools. Evolutionary algorithms are complex systems that consist of many interacting units (e.g., individuals, ...) whose interactions can be recorded as a virtual complex network. The attributes of this network can then be analysed, studied and used to better understand the dynamical processes inside the system under consideration and/or to control or optimize its behaviour and structure.
Original research papers are welcome in this session that discuss new results, building on previous research, on PSO, differential evolution, SOMA, GA, ABC and other algorithms whose dynamics have been converted into a related complex network structure, analysed, and used to improve the performance of these bio-inspired algorithms.
Scope and Topics
The aim of this session is to bring together people from fundamental research and experts from various applications of evolutionary algorithms and complex systems, to develop mutual intersections and fusion. Discussion of possible hybridizations amongst them, as well as real-life experiences with computer applications, will also be carried out to define new open problems in this interesting and fast-growing field of research. The special session will focus on, but is not limited to, the following topics:
Organized by Liqiang Hou, Massimilano Vasile and Edmondo Minisci
Epistemic uncertainties due to lack of knowledge can be found in many real-life design problems.
Such uncertainties cannot be modeled using conventional statistical tools, e.g., Gaussian distribution models. Instead, tools such as Evidence Theory (belief function theory) can be used to model them. Plausibility and belief, which describe the upper and lower bounds of the possible results, are used to evaluate the uncertainty impacts.
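For reference, in evidence theory the belief and plausibility of an event A are obtained from the basic probability assignment m over focal elements B as

\[
\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B), \qquad \mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B), \qquad \mathrm{Bel}(A) \le \mathrm{Pl}(A),
\]

so that the interval \([\mathrm{Bel}(A), \mathrm{Pl}(A)]\) brackets the unknown probability of A; here \(m(\emptyset) = 0\) and \(\sum_{B} m(B) = 1\).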
In some more complex problems, the epistemic uncertainties take a more complex form, e.g., uncertainties with a Basic Probability Assignment (BPA) structure over the estimated values and variances. Design optimization under uncertainties can then be formulated as a multi-objective optimization problem, with objectives to minimize the system function values and to maximize the corresponding beliefs. In both cases, a step-like optimal Pareto front should be obtained.
Another problem that designers frequently face in real-life engineering optimization is fidelity management in the design optimization. A common way to tackle design problems with multi-fidelity models is to start the optimization with the low-fidelity model and then use the expensive high-fidelity model to improve the design solutions. This strategy works well on some problems but risks becoming trapped in pseudo-optimal solutions if the values predicted by the low-fidelity model are not consistent with the high-fidelity ones in that region. Therefore, a model fidelity management strategy for determining when the high-fidelity model should be used is required; such strategies include kriging, space mapping, trust-region methods, etc. However, most of these strategies are designed for single-objective design cases, e.g., maximizing the lift-to-drag ratio of an airfoil at a specified Mach number. For a more complex design problem with multi-objective optimization under uncertainties, such strategies should be improved before they can be used in the design optimization.
In recent years, methods for optimization under epistemic uncertainties and for multi-fidelity models have been developed, but separately. However, in many cases both epistemic uncertainties and multi-fidelity models are involved. The session aims to develop efficient, combined strategies for these problems and to incorporate evidence computation and model fidelity management efficiently into the design optimization. As the evidence computation can consume a huge amount of computational resources when many epistemic uncertainties are involved, highly efficient approximation techniques are required, and benchmark functions for the design optimization should be developed as well.
Scope and Topics
Design problems involving epistemic uncertainties and multi-fidelity models arise in many practical settings, particularly in aerospace engineering. The issues involved include MOO algorithms, uncertainty modeling, model fidelity management, and strategies to integrate them so that the optimization can be carried out efficiently.
The session seeks to promote the discussion and presentation of novel works related with (but not limited to) the following issues:
Organized by Vaclav Snasel and Ajith Abraham
One of the biggest challenges in Big Data analysis is to solve the problem of “semantic gaps” between low-level features and high-level semantic concepts. Geometrical and Topological methods are tools for analyzing highly complex data. These methods create a summary or compressed representation of all of the data features to help rapidly uncover critical patterns and relationships in data. The idea of constructing summaries over whole domains of parameter values involves understanding the relationship between geometric objects constructed from data using various parameter values.
The main aim of this Special Session is to explore the new frontiers of big data computing for Geometrical and Topological methods through computational intelligence techniques, in order to analyze highly complex data more efficiently.
Scope and Topics
The proposed special session aims to bring together theories and applications of Geometrical and Topological methods in Evolutionary Computing to analyze highly complex data. Topics of interest include, but are not limited to:
Organized by Bogdan Filipic, Thomas Bartz-Beielstein and Carlos A. Coello
Many real-world optimization problems involve multiple, often conflicting objectives and rely on computationally expensive simulations to assess these objectives. Such multiobjective optimization problems can be solved more efficiently if the simulations are partly replaced by accurate surrogate models. Surrogate models, also known as response surface models or meta-models, are data-driven models built to simulate the processes or devices that are subject to optimization. They are used when more precise models, such as those based on the finite element method or computational fluid dynamics, require too much time and too many resources. While surrogate models allow for fast simulation and assessment of the optimization objectives, they also represent an additional source of impreciseness. In multiobjective optimization, this may constitute a particular challenge when comparing candidate solutions. The aim of this special session is to bring together researchers and practitioners working with surrogate-based multiobjective optimization algorithms to present recent achievements in the field and discuss directions for further work.
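A minimal sketch of the surrogate-assisted pre-selection loop is given below; the expensive_simulation stand-in, the polynomial degree and the sample sizes are purely illustrative assumptions, not a description of any particular method covered by this session:

import numpy as np

def expensive_simulation(x):
    # stand-in for a costly simulation (hypothetical toy function)
    return (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 12)                              # a few expensive samples
y_train = np.array([expensive_simulation(x) for x in x_train])
coeffs = np.polyfit(x_train, y_train, deg=4)                 # cheap polynomial response surface

candidates = rng.uniform(0, 1, 500)                          # e.g. offspring of an EMO algorithm
predicted = np.polyval(coeffs, candidates)                   # fast surrogate evaluations
promising = candidates[np.argsort(predicted)[:5]]            # pre-select on the surrogate
true_vals = [expensive_simulation(x) for x in promising]     # only 5 expensive calls remain
print(min(zip(true_vals, promising)))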
Scope and Topics
Prospective authors are invited to submit their original and unpublished work on all aspects of surrogate-assisted multiobjective optimization. The scope of the special session covers, but is not limited to the following topics:
Organized by Yun Li, Cindy Goh, Leo Chen and Zhi-Hui Zhan
This Special Session is dedicated to the latest developments of computational intelligence for Industry 4.0, the first a-priori engineered (and the fourth) ‘Industrial Revolution’. Having focused so far on smart manufacturing and cyber-physical systems, efforts in Industry 4.0 have lacked the smart design and business elements for manufacture that are necessary to complete this unprecedented upgrade of the value chain. Computational intelligence, however, provides an extra-numeric, as well as efficiently numeric, tool to realise this goal. The Special Session therefore encourages and reports applications to Industry 4.0 in the era of interactive cloud computing and data science.
Scope and Topics
Computational intelligence, primarily comprising artificial neural networks and learning systems, evolutionary computation, and fuzzy logic and systems, is a set of nature-inspired modelling and optimisation approaches to complex real-world problems which traditional approaches, such as first-principles modelling and statistical or curve fitting, are ineffective at or incapable of addressing. We are soliciting original research papers or reviews that would shape and advance a smart design and business environment for Industry 4.0. Papers addressing how to revolutionise the way that smart designs are created and smart machines are built, thereby leading to a step improvement in manufacturing autonomy and industrial efficiency, performance and competitiveness, will be most welcome. Main topics include (but are not limited to):
Organized by Faiyaz Doctor, Christian Wagner, Dongrui Wu and Marie-Jeanne Lesot
Affective Computing (AC) is “computing that relates to, arises from, or deliberately influences emotions,” as initially coined by
Professor R. Picard (Media Lab, MIT). It has been gaining popularity rapidly in the last decade because it has great potential in the
next generation of human-computer interfaces. One goal of affective computing is to design a computer system that responds in a rational
and strategic fashion to real-time changes in user affect (e.g., happiness, sadness, etc), cognition (e.g., frustration, boredom, etc.)
and motivation, as represented by speech, facial expressions, gestures, physiological signals, neurocognitive performance, etc. Physiological
Computing (PC) relates to computation that incorporates physiological signals in order to produce useful outputs (e.g., in computer-human
interaction). It mainly differs from AC in the sense that its foremost focus is not the modeling of affect but rather the utilization of
physiological information generally.
Practical applications of AC and PC based systems seek to achieve a positive impact on our everyday lives by monitoring, recognising and acting
on our emotional states and physiological signals. Integrating these sensing modalities into intelligent and pervasive computing systems will reveal
a far richer picture of how our fleeting emotional responses, changing moods, feelings and sensations, such as pain, touch, tastes and smells, are a
reaction to or influence how we implicitly or explicitly interact with the environment and increasingly the connected computing artifacts within.
The integration and use of AC and PC raise many new challenges for signal processing, machine learning and computational intelligence. Fuzzy Logic
Systems in particular provide a highly promising avenue for addressing some of the fundamental research challenges in AC/PC, where most data sources, such as body signals (e.g., heart rate, brain waves, skin conductance and respiration), facial features, speech and human kinematics, are very noisy/uncertain and subject-dependent. Clearly, however, other key areas of CI research, such as evolutionary learning algorithms and neural-network-based classifiers, provide essential tools to address the significant challenges of AC/PC.
Scope and Topics
The Computational Intelligence and Physiological and Affective Computing special session aims to bring together researchers from the three areas of CI to discuss how CI techniques can be used individually or in combination to help solve challenging AC/PC problems, and conversely, how physiological and affect (emotion) and its modeling can inspire new approaches in CI and its applications. Topics of interest for this special session include but are not limited to:
Organized by Keeley Crockett and Joao Paulo Carvalho
Although language, or linguistic expressions, undoubtedly contains fuzziness by nature, very little research has been conducted in related fields in recent years, as shown in "A Critical Survey on the use of Fuzzy Sets in Speech and Natural Language Processing", Proc. of the IEEE WCCI 2012, Brisbane, Australia. This is partly because of the prevalence of probabilistic machine learning technologies in the natural language processing field. However, there has been a growing recognition that the fuzziness found in every aspect of human language has to be adequately captured, and that recent developments in computational intelligence, such as computing with words, can make a contribution. This session will follow on from the successful special session entitled "Fuzzy Natural Language Processing", which was held at IEEE FUZZ 2015 in Istanbul and IEEE FUZZ 2013 in India, and the hybrid special session held at the 2014 IEEE World Congress on Computational Intelligence in Beijing.
The aim of this Special Session is therefore to explore new techniques and applications in the field of computational intelligence approaches to natural language processing.
Scope and Topics
The session will provide a forum to disseminate and discuss recent and significant research efforts in fuzzy, neural and evolutionary methods for natural language processing in addition to hybrid and emerging computational intelligence paradigms. It invites researchers from different related fields and gathers the most recent studies including but not limited to:
Organized by Vassilis Plagianakos and Roberto Tagliaferri
Bioinformatics, computational biology, and bioengineering present a number of complex problems with large search spaces. Recent applications of Computational Intelligence (CI) in this area suggest that CI methods are well suited to this area of research. This special session will highlight applications of CI to a broad range of topics. Particular interest will be directed towards novel applications of CI approaches to problems in these areas. The scope of this special session includes evolutionary computation, neural computation, fuzzy systems, artificial immune systems, swarm intelligence, ant colony optimization, simulated annealing, and other CI methods or hybridizations between CI approaches. Applications of these CI methods to bioinformatics, computational biology, and bioengineering problems are the main focus of this hybrid special session. There is a clear interest in both the Computational Intelligence and Biology communities for this special session. This hybrid special session is sponsored by the IEEE CIS BBTC (Computational Intelligence Society - Bioinformatics and Bioengineering Technical Committee).
Scope and Topics
Topics of interest include but are not limited to:
Organized by Daniel Ashlock and Ruck Thawonmas
Games are an ideal domain to study computational intelligence (CI) methods because they provide affordable, competitive, dynamic, reproducible environments suitable for testing new search algorithms, pattern-based evaluation methods, or learning concepts. They are also interesting to observe, fun to play, and very attractive to students. Additionally, there is great potential for CI methods to improve the design and development of both computer games and non-digital games such as board games. This special session aims at gathering not only leading researchers but also young researchers and practitioners in this field who study applications of computational intelligence methods to computer games.
Scope and Topics
In general, papers are welcome that consider all kinds of applications of CI methods (evolutionary computation, supervised learning, unsupervised learning, fuzzy systems, game-tree search, etc.) to games (card games, board games, mathematical games, action games, strategy games, role-playing games, arcade games, serious games, etc.). Examples include:
Organized by Chuan-Kang Ting, Tatiana Tambouratzis, Francisco Fernández de Vega, Stefanos Kollias, Palle Dahlstedt and Aggelos Pikrakis
Computational intelligence (CI) techniques, including evolutionary computation, neural networks, and fuzzy systems, have produced many promising results and become an important tool in computational creativity, such as in music, visual art, literature, architecture, and industrial design.
Scope and Topics
The aim of this special session is to reflect the most recent advances of CI for Music, Art, and Creativity, with the goal to enhance autonomous creative systems as well as human creativity. This session will allow researchers to share experiences and present their new ways for taking advantage of CI techniques in computational creativity. Topics of interest include, but are not limited to, CI technologies in the following aspects:
Organized by Steve S. H. Ling, Xin Xu, H.K. Lam, Hung T. Nguyen, Kit Yan Chan and Rifai Chai
Nowadays, computational intelligence methods play an important role in health technology research, which brings together complementary interdisciplinary research practices in the development of innovative medical devices and biotechnological processes for health applications. In general, feasible results may be obtained by applying traditional artificial intelligence methods to a health application. However, health technologies demand greater robustness, precision and efficiency, and applying traditional artificial intelligence methods may not achieve multiple goals for a particular health application. Recent research indicates that advanced computational intelligence methods can help to achieve more satisfactory performance for a particular health application. With the rapidly growing complexity of health design problems and the more demanding quality of health applications, the development of advanced computational intelligence methods for health technologies is hence a critical issue. This special session intends to bring together researchers to report the latest results or progress in advanced computational intelligence methods for health technologies.
Scope and Topics
The field of interest of this special session is the application of recent concepts and methods of computational intelligence in health technologies. The topics cover a broad range of health applications, and we are soliciting contributions on (but not limited to) the following aspects:
Organized by Barbara Hammer, D. Frank Hsu and Marios M. Polycarpou
Due to improved sensor technology, increasing storage space, and data availability, digital data sets are rapidly increasing with respect to size, dimensionality, and complexity. On the one hand, big and streaming data sets are becoming more and more common in complex systems such as industrial manufacturing processes, surveillance, finance, social networks, or health-care. On the other hand, the dimensionality of data can easily reach a few thousand, and data sources are often enriched by auxiliary information which gives crucial clues to avoid overfitting. These facts demand advanced methods and tools which can cope with such big and complex data with respect to not only its sheer size but also its often challenging statistical properties, such as heterogeneous quality, data trends, the presence of rare events, and the necessity for strong regularisation.
Scope and Topics
This special session will focus on advanced data analysis methods for big and streaming data that enable reliable and computationally feasible access to such data sets. Submissions are encouraged according to the following non-exhaustive list of topics:
Organized by Okan Duru and Matthew Butler
Among the many topics in computational intelligence, there is growing interest in economic and financial analysis as well as in models for economic and financial management. This special session invites submissions addressing computational advancements and intelligent solutions for economic and financial management, in addition to theoretical discussions on the development and use of intelligent techniques.
Scope and Topics
This special session is particularly organized for applications and modelling practice in economics and finance. The scope of this special session includes, but is not limited to:
Organized by Tatiana Tambouratzis, Andreas-Georgios Stafylopatis, Kostas Karatzas and Mikko Kolehmainen
Environmental sustainability has become a topic of particular interest, and concern, in the last 20 years. Environmental sustainability is focused upon responsible decision-making and action-taking for the protection of the environment, thus boosting the ability of the environment to continue to support life. At the same time, environmental sustainability tackles the issue of developing optimal practices that will reduce, and eventually minimise, the negative impact on the environment. Further to pollution, waste, and energy reduction, environmental sustainability aims at developing processes that will help human societies to become completely sustainable in the future. The use of non-parametric, noise-resistant, and learn-by-example approaches is pertinent to this end, and constitutes the focus of this WCCI’16 Special Session, a sequel to the IJCNN'15 Special Session of the same name.
Scope and Topics
Topics of interest include but are not limited to:
Organized by Michael J Watts and Jie Yang
The aim of this special session is to provide a forum for recent research in the application of computational intelligence in the areas of
ecological informatics, ecological modelling and environmental modelling. This is a highly topical area and is open to a broad array of methods
from the field of computational intelligence, and follows from the successful special session “Applications of Computational Intelligence in
Ecological Informatics and Environmental Modelling” held at WCCI 2014.
Ecological informatics and the related field of ecological modelling involve constructing computational models of ecological systems. Environmental
modelling is closely related and involves constructing models of the physical environment that biological eco-systems inhabit. Ecological models
include such things as the distribution or abundance of particular species, models of the interaction
between multiple species, and models of the future development of populations of species. Environmental
models cover such topics as the climate and climate change and the detection of landscape features. Models have also been constructed of waste management systems, water quality and drainage systems, water contamination events, flooding, and air pollution.
The amount of data describing global and local environments and the eco-systems that inhabit them is rapidly increasing. As these are highly-complex systems,
algorithms from the field of computational intelligence have already been widely applied to modelling this data. Previous work has
successfully solved numerous problems in ecological and environmental modelling using artificial neural networks, evolutionary algorithms, and fuzzy systems approaches. In each case, computational intelligence methods were shown to be more effective at solving the problem
than the alternative methods.
This session is of wide appeal to participants of WCCI 2016 because it involves all three primary fields of interest of the conference: Artificial Neural Networks
(ANN), Fuzzy Systems (FS) and Evolutionary Algorithms (EA). It follows on from the successful special session held at WCCI 2014. It is also an emerging area of
research as the majority of publications and researchers in this area continue to be ecologists rather than computational intelligence researchers. There is
therefore a continuing scope for researchers in computational intelligence to make a strong contribution to this emerging field.
Scope and Topics
Topics relevant to this special session include, but are not limited to, the following applications of computational intelligence, including ANN, FS, and EA:
Organized by Francesco Masulli, Sanaz Mostaghim and Alexandru G Floares
Due to the explosive evolution of Information Technology and Computer Science, Biomedicine has entered the Big Data Age, and this is a genuine scientific revolution, not just a fashion. As always, the technological aspects evolve faster than the scientific community's mentality. Transforming Big Data into Big Knowledge and developing a Knowledge-Based Medicine require new visions and approaches. Companies facing the Big Data challenges, being under stronger competitive pressure, are moving faster in the right direction than the biomedical community. They were forced to renounce wishful thinking, such as the idea that a few variables, embedded in a few rules, discovered using old-fashioned statistics, will give intelligent support for business decisions. We have to do the same in developing adequate diagnosis, prognosis, and response-to-treatment predictive models/tests.
On the positive side, the biomedical community has to realize that we are already in the Big Knowledge Age too. Curated facts from literature, either
manually or by Text/Web Mining, are stored in large repositories and integrated as structured knowledge. Dedicated software tools (e.g., DAVID, Metacore,
and Ingenuity Pathways Analysis) allow the users to search for knowledge, which could be represented in biologically meaningful ways, like pathways or networks.
Computational Intelligence (CI) methodologies, tailored to Big Data, and combined with a proper vision of living systems, e.g., as complex dynamical systems
or networks of interacting entities, could pave the way to Knowledge-Based Medicine. Precision Medicine should be viewed not only as an increase in measurement accuracy but also as highly accurate predictive models (Predictive Medicine), discovered from Big Data with CI tools. All the steps of the workflows from Big
Data to Big Knowledge could greatly benefit from using all CI methodologies, and this is why this special session is addressed to all of the three sections
of the WCCI 2016.
Scope and Topics
Authors are encouraged to apply CI methods and emphasize how their results could be incorporated into the biomedical domain knowledge. Topics include, but are not limited to:
Organized by Kumarappan N
The demand for electrical energy is growing exponentially, and the quality and reliability requirements of modern power systems are becoming more and more stringent. This special session will focus on the applications of computational intelligence for planning, operation, control, and optimization of electric power systems, in order to provide a more secure, stable and reliable system. The computational intelligence techniques include neural computation, evolutionary computation, swarm intelligence, artificial immune systems, ant colony search, pattern recognition, data mining, the firefly algorithm, artificial bee colony, etc.
The objective of this special session is to bring together researchers from the academia and industry in the fields of power system engineering and
computational intelligence.
Scope and Topics
The special session invites contributions in the areas including, but not limited to, the following:
Organized by Derek T. Anderson, Timothy C. Havens and Hussein Abbass
Given the rapidly changing and increasingly complex nature of global security, we continue to
witness a remarkable interest within the security and defense communities in novel, adaptive
and resilient techniques that can cope with the challenging problems arising in this domain.
These challenges are brought forth not only by the overwhelming amount of data reported by
a plethora of sensing and tracking modalities, but also by the emergence of innovative classes
of decentralized, mass-scale communication protocols and connectivity frameworks such as
cloud computing, vehicular networks, and the Internet of Things (IoT). Realizing that
traditional techniques have left many important problems unsolved, and in some cases, not
addressed, further efforts have to be undertaken in the quest for algorithms and methodologies
that can detect and easily adapt to emerging threats.
Scope and Topics
The purpose of this Special Session is to provide a forum for the exchange and discussion of
current solutions in Computational Intelligence (e.g., neural networks, fuzzy systems,
evolutionary computation, swarm intelligence, and other emerging learning or optimization
techniques) as applied to security, surveillance, and defense problems. High-quality technical
papers addressing research challenges in these areas are solicited. Papers should present
original work validated via analysis, simulation, or experimentation, pertaining, but not limited,
to the following topics:
Computational Intelligence for Advanced Architectures for Defense Operations
Organized by Mohamed Tawhid and Vimal Savsani
The use of Computational Intelligence techniques such as Genetic Algorithms, Particle Swarm Optimization, Differential Evolution, Artificial Bee Colony optimization, Teaching-Learning-Based Optimization, Cuckoo Search, etc., has grown in importance for numerous engineering applications, and such techniques are considered important tools in the engineering field.
Scope and Topics
The aim of this special session is to present the most recent advances of Computational Intelligence techniques for different engineering applications to improve the overall design of systems. This session is targeted to provide a common platform for researchers and experts to interact and share their knowledge, and to take advantage of such techniques in the engineering field. Topics of interest may include, but are not limited to, the following highlights:
Organized by Chung-Ming Ou and Chung-Ren Ou
Complex networks can be seen everywhere, in biology, chemistry, ecology, economics, physics, and even the Internet. There have been many achievements in complex networks based on a variety of theories and models from mathematics and physics as well. However, the structures and internal properties of complex networks that lead to major applications in science and technology are still worth exploring. Computational Intelligence (CI) methodologies can contribute greatly to solving these issues, in particular the dynamics of complex networks. Applications of CI methods to model and simulate complex networks are the main focus of this special session. The scope of the methodologies and ideas may include neural networks, artificial immune systems, swarm intelligence, fuzzy systems and other CI methods, or general approaches from applied mathematics, physics, bio-inspired methodologies and control theory.
Scope and Topics
Topics of interest include but are not limited to:
Organized by Paolo Cazzaniga, Daniel Ashlock, Marco S. Nobile and Daniela Besozzi
Research problems in Bioinformatics, Computational Biology and Systems Biology deal with systems at different scales of complexity and granularity (from the inference of single molecular structure to the emergent behavior of genome-wide networks), each one requiring completely different computational methods. Computational intelligence is frequently exploited to devise efficient heuristics solving problems in these disciplines; however, these approaches can be computationally challenging, limiting their applicability to real-world problems. The scope of this special session is to bring together researchers involved in the development of computational intelligence methods applied to Bioinformatics, Computational Biology and Systems Biology, specifically accelerated either by means of conventional architectures (e.g., computer clusters, GRID computing) or by unconventional technologies (e.g., Graphics Processing Units, Many Integrated Core coprocessors, biomimetic devices).
Scope and Topics
The scope of this session includes accelerated Computational Intelligence methods applied to the fields of Bioinformatics, Computational Biology and Systems Biology. Topics of interest include, but are not limited to:
Organized by Hongwei Mo and Chaoming Luo
An unmanned system (US) is a machine or device that is equipped with the necessary data processing units, sensors, automatic control, and communications systems, and is capable of performing missions autonomously without human intervention. Unmanned systems include unmanned aircraft, ground robots, underwater explorers, satellites, and other unconventional structures.
Computational Intelligence (CI) includes classical evolutionary computation, neural computation, fuzzy systems, swarm intelligence (Particle Swarm Optimization, Ant Colony Optimization, etc.) and other new CI methods such as bee colony optimization algorithms, Biogeography-Based Optimization, firefly algorithms, or hybridizations of CI approaches.
Scope and Topics
This special session aims to cover all subjects of unmanned systems relating to the development of automatic machine systems based on CI, which include advanced technologies in unmanned hardware platforms (aerial, ground, underwater and unconventional platforms), unmanned software systems, energy systems, modeling and control, communications systems, computer vision systems, sensing and information processing, navigation and path planning, and innovative application case studies.
Authors are invited to submit their original and unpublished work to this special session. Topics of interest include but are not limited to:
CI methods solving technical issues underlying the development of unmanned systems.
Organized by Yong Xu, Hale Kim, Qinghan Xiao, David Zhang and Fabio Scotti
Biometrics is a technology that focuses on using measurable human physiological or behavioural characteristics to reliably distinguish one person from others. Because of the fuzzy nature of biometrics, there are no two samples that will be perfectly identical. Computational intelligence (CI), primarily based on artificial intelligence, neural networks, fuzzy logic, evolutionary computing, etc., has been exploited to solve biometric problems with promising results. This special session intends to provide an interdisciplinary forum for researchers and practitioners, from industry, government, and academia, to share their research and experiences in the field of computational intelligence in biometrics. The focus of this special session is on innovative, new technologies designed to address important issues in the development of biometrics.
Scope and Topics
Possible topic areas include, but are certainly not limited to the following areas:
Organized by Anil Kumar, Arun Khosla, Jasbir Singh Saini and Shelly Sachdeva
Wireless Sensor Networks (WSNs) play a vital role in our society, as they have become the archetype of pervasive technology. WSNs consist of an array of sensors, of either the same or diverse types, interconnected by a communication network. Fundamental design objectives of sensor networks include reliability, accuracy, flexibility, cost-effectiveness and ease of deployment. Sensors perform routing functions to create single- or multi-hop wireless networks that convey data from one sensor node to another. The rapid deployment, self-organization and fault-tolerance characteristics of WSNs make them promising for a number of military and civilian applications.
WSN design is often treated as a multi-modal, multi-dimensional optimization problem and addressed through Computational Intelligence by minimizing an objective (error) function.
The main aim of this Special Session is to explore new aspects of Computational Intelligence algorithms in WSNs that minimize computational complexity and improve performance.
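To make the optimization framing above concrete, the sketch below shows one way it can look in practice: a basic particle swarm optimizer places a handful of hypothetical sensor nodes so as to minimize a toy coverage-error objective. The node count, field size and PSO parameters are arbitrary assumptions and are not prescribed by the session.

```python
# Illustrative sketch only: particle swarm optimization (PSO) placing a few
# hypothetical sensor nodes in a unit square so that every grid point is close
# to at least one node (a toy "coverage error" objective).
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 5                      # assumed number of sensor nodes
DIM = 2 * N_NODES                # each particle encodes (x, y) for every node
GRID = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)

def coverage_error(position):
    """Mean distance from each grid point to its nearest sensor node."""
    nodes = position.reshape(N_NODES, 2)
    d = np.linalg.norm(GRID[:, None, :] - nodes[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# Standard global-best PSO with inertia plus cognitive/social terms.
n_particles, iters = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5
x = rng.random((n_particles, DIM))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([coverage_error(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, DIM))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    f = np.array([coverage_error(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best coverage error:", pbest_f.min())
print("node positions:\n", gbest.reshape(N_NODES, 2))
```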
Scope and Topics
The Special Session aims to bring researchers working in the domain of Computational Intelligence in WSNs to an international forum to discuss and explore their latest findings. Topics include, but are not limited to:
Organized by Chao-Hui Huang and Emarene M. Kalaw
Traditionally, detection, diagnosis and prognosis are based on the visual examination of histopathological tissue samples using a microscope, and they are labor-intensive tasks for pathologists. For example, the department of pathology in a major hospital (such as Singapore General Hospital) receives about one thousand slides per day. Thus, the emerging technologies of whole-slide imaging, digital pathology and virtual microscopy applications are becoming more popular, as they provide more objective diagnosis, e.g., cancer grading, that will lead to better prognostication. Hence, pathologists' workload can be reduced.
In addition, the combination of immunohistochemistry and virtual microscopy can be a powerful tool for supporting pathologists' daily work. For example, recent research shows that the oncogenes PTEN, ERG and C-MYC are critical in prostate carcinogenesis. However, there are still pitfalls and inter-observer variability in manual assessment under the microscope. An automated assessment will be useful for performing diagnosis.
In this session, we will bring pathologists and computer scientists together, discuss the current issues of disease diagnosis in pathology, and consider how computational intelligence can help in pathologists' daily work. We will review recent developments in digital histopathology, present novel methods that will lead to automatic and objective cancer grading in clinical practice, and discuss how these computational intelligence methods may improve prognostication in the future.
Scope and Topics
The aim of this session is to provide a common platform for the researchers and experts in both fields of pathology and computational intelligence to share their knowledge. Topics of interest may include, but are not limited to:
Organized by Meng Joo Er and Ning Wang
Unmanned Surface Vehicles (USVs) are now being deployed in an array of different application areas in the commercial, naval and scientific sectors. For example, they are currently being used for mine counter-measures, surveying and environmental data gathering. For such vehicles to be capable of undertaking the kinds of missions that are now being contemplated, they require robust, reliable, accurate and adaptable autopilot systems which allow seamless switching between automatic and manual control modes. Such properties in marine control systems are essential to cope with the changes in the dynamic behaviour of the vehicles that may occur owing to the deployment of different payloads, mission requirements and varying environmental conditions.
Modelling and control of USVs has been, and will remain, a crucial and challenging issue in both the marine engineering sector and the control community. Surface vehicles invariably navigate in uncertain environments with unknown disturbances from winds, waves and currents, etc. In this context, mathematical models can at most partially capture the simplified dynamics of USVs, since high-order hydrodynamic derivatives with respect to essential nonlinearities and external forces can hardly be obtained accurately. Even though nominal models can be derived to a certain degree of accuracy, it is still a great challenge to design an effective control law.
Scope and Topics
Previous research works available in the literature can be classified into two categories, i.e., model-based and approximation-based strategies. Model-based approaches require the system dynamics to be at least partially known so that traditional nonlinear control laws, e.g., feedback linearization, the backstepping technique, sliding-mode control (SMC), etc., can possibly be applied. Unfortunately, traditional adaptive control techniques are meaningful only for systems whose nonlinear dynamics and/or uncertainties are linear-in-the-parameters with explicitly defined regressors. Moreover, the surface vehicle dynamics inevitably suffer from complex hydrodynamics, uncertainties and unknown disturbances with respect to external environments, thereby resulting in great difficulties in using traditional control methods.
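By way of illustration only, the sketch below simulates a textbook sliding-mode controller on a generic double-integrator plant with a bounded unknown disturbance, the simplest instance of the model-based laws mentioned above; the plant, gains and disturbance are assumptions chosen for clarity and are not a USV model.

```python
# Illustrative sketch only: classical sliding-mode control (SMC) on a generic
# double integrator x1' = x2, x2' = u + d(t) with a bounded, unknown
# disturbance d(t). All parameters are assumptions for demonstration purposes.
import math

lam, k = 2.0, 1.5          # sliding-surface slope and switching gain (k > |d|)
dt, T = 0.001, 10.0        # integration step and horizon
x1, x2 = 1.0, 0.0          # initial tracking error and its derivative

t = 0.0
while t < T:
    d = 0.8 * math.sin(2.0 * t)                 # unknown disturbance, |d| <= 0.8 < k
    s = x2 + lam * x1                           # sliding surface s = e_dot + lam * e
    u = -lam * x2 - k * math.copysign(1.0, s)   # equivalent control + switching term
    # forward-Euler integration of the closed loop
    x1 += dt * x2
    x2 += dt * (u + d)
    t += dt

print(f"final error {x1:.4f}, final rate {x2:.4f}")   # both should end up near 0
```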
In this special session, we will solicit the latest research results and technological know-how from researchers all over the world, and we hope to have a very robust discussion on the recent developments and future trends pertaining to intelligent control of USVs.
The main topics of this special session include, but are not limited to, the following:
Organized by Min Jiang, Changle Zhou, Xiangxiang Zeng and Fei Chao
Effective transfer of knowledge would have significant theoretical and practical value. For example, it could enable robots to rapidly gain skills and adapt to new surroundings with relatively low computational cost. However, the path to achieving this goal is obstructed by a number of difficulties related to computational resource limitations, uncertainties of information acquisition, and the omnipresence of ambient noise. This special session aims at presentations of the latest research activities related to all facets of knowledge and skill transfer, particularly from the perspective of computational intelligence and its applications to robots. The special session is open to contributions on any topic directly or indirectly related to computational intelligence in/for knowledge transfer and skill transfer, touching on at least one of the issues mentioned above. Submissions presenting empirical or mathematical results are especially welcome, but conceptually rigorous and innovative contributions of any kind will be seriously considered, if relevant.
Scope and Topics
Specific topics include, but are not limited to, the following:
Organized by Giovanni Acampora and Alessandro Di Nuovo
Cognitive Agents are inspired by human cognitive capabilities to exhibit effective behavior through perception, action, deliberation, communication, and through either individual or social interaction with the environment. These computational agents have the ability to function effectively in circumstances not explicitly planned for when the system was designed. This ability makes them especially suited to controlling complex systems, such as robots, which are designed to provide services and proactively interact with people and other artificial agents in composite human-centric environments. Despite the tremendous potential applications of these “Cognitive Systems” and the subsequent interest from the scientific and industrial communities, several issues and challenges are still open.
The special session aims to attract new developments in the area of cognitive systems, such as novel engineering principles, models and applications of cybernetic systems capable of autonomously improving their capabilities when interacting with the environment in an open-ended process.
As a follow-up, authors of accepted papers will be invited to submit an extended version to the journal IEEE Transactions on Cognitive and Developmental Systems (the new name of the IEEE Transactions on Autonomous Mental Development).
Scope and Topics
The special session welcomes all contributions in the area of Artificial Cognitive Systems applied to human-centric environments, with particular interest in (but not limited to) the following topics:
Organized by Mufti Mahmud and Amir Hussain
The brain, being the most complex organ in the human body, is specialized to process information coming simultaneously from many different sources. Neurons work as the basic information processing units in the brain and interconnect with each other to form hierarchical and/or parallel pathways. These pathways are mainly involved in transforming information originating from one or more sources into either action (as in motor movements) or specialized information understood by the brain itself (as in cognitive functions).
To gain a detailed and better understanding of these biological phenomena, two approaches have been practiced by the research community: experimental and theoretical studies. Some theoretical studies are also inspired by nature itself, reframing earlier computational techniques to suggest research on the biophysical basis of the brain and its information processing capabilities. Needless to say, most of these studies are the result of interdisciplinary research involving the medical sciences, life sciences, physical sciences, engineering, and cognitive sciences.
Scope and Topics
The focus of this special session is to address recent advances in computationally intelligent techniques for processing neural information. Developing intelligent methods capable of deciphering the brain's information processing capability is one of the biggest challenges in brain research. The objective of this special session is to provide updated information and a forum for scientists and researchers who are looking for relevant information on decoding brain functions using expert and computationally intelligent systems.
This special session is expected to attract papers on recent research progress in the area of intelligent computational methods for processing neural signals. The targeted research topics include, but are not limited to, the following:
Organized by Chu Kiong Loo, Janos Botzheim and Naoyuki Kubota
Recently, various types of intelligent robots have been developed for the society of the next generation. In particular, intelligent robots should continue to perform tasks in real environments such as houses, commercial facilities and public facilities. The growing need to automate daily tasks, combined with new robot technologies, is driving the development of human-friendly robots, i.e., safe and dependable machines operating in close vicinity to humans or directly interacting with persons in a wide range of domains. The technology shift from classical industrial robots, which are safely kept away from humans in cages, to robots that will be used in close collaboration with humans poses major technological challenges that need to be overcome. A robot should have human-like intelligence and cognitive capabilities to co-exist with people. The study of the intelligence, cognition, and self of robots has a long history. The concepts of adaptation, learning, and cognitive development should be introduced more intensively into the next generation of robotics from the theoretical point of view. Fuzzy, neural, and evolutionary computation play important roles in realizing the cognitive development of robots from the methodological point of view. Furthermore, the synthesis of information technology, network technology, and robot technology may bring brand-new emergent intelligence to robots from the technical point of view. The structurization of information and knowledge is a key topic to support the cognitive development of robots. This special session focuses on the intelligence of robots emerging from adaptation, learning, and cognitive development through interaction with people and dynamic environments, from the conceptual, theoretical, methodological, and/or technical points of view.
Scope and Topics
The topics of interests in the special session include, but are not limited to:
Organized by Sandeep Paul, Lotika Singh and Apurva Narayan
The computational intelligence (CI) paradigm is a triumvirate of three technologies, neural networks, fuzzy logic, and evolutionary algorithms, which stresses their seamless integration, resulting in numerous important spin-off commercial applications. The integration of these technologies has assumed various forms such as neuro-genetic, genetic-fuzzy and neuro-fuzzy-evolutionary hybrids. In an attempt to address complex real-world problems, especially with high-dimensional data, models with deep architectures are showing a promising path towards efficient and robust solutions. Deep architecture systems have a deep multi-layer structure and represent hierarchical information which is more robust and reusable than that of classical neural network approaches.
The hybridization of fuzzy and evolutionary algorithms with such deep architecture systems is in its nascent stage. The inherent challenges, such as structure and parameter learning, automatic feature extraction and learning algorithms in deep architecture systems, are yet to be fully explored. The theoretical analysis of such deep architecture models is another much-needed area of work.
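For concreteness, and assuming scikit-learn is available, the sketch below shows one simple way evolutionary search can be hybridized with deeper architectures: a (1+1)-style mutation loop evolves the hidden-layer widths of a small multi-layer perceptron on synthetic data. The dataset, mutation operators and budget are arbitrary assumptions, not methods endorsed by the session.

```python
# Minimal illustrative sketch: a (1+1)-style evolutionary search mutating the
# hidden-layer widths of a small MLP. Assumes scikit-learn is installed.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

random.seed(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           random_state=0)

def fitness(layers):
    """Mean 3-fold accuracy of an MLP with the given hidden-layer widths."""
    clf = MLPClassifier(hidden_layer_sizes=tuple(layers), max_iter=300,
                        random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

def mutate(layers):
    """Randomly grow/shrink a layer, or add/remove one (kept within bounds)."""
    child = list(layers)
    op = random.choice(["resize", "add", "remove"])
    if op == "resize" or (len(child) == 1 and op == "remove"):
        i = random.randrange(len(child))
        child[i] = max(4, child[i] + random.choice([-8, 8]))
    elif op == "add" and len(child) < 4:
        child.append(random.choice([8, 16, 32]))
    elif op == "remove" and len(child) > 1:
        child.pop()
    return child

parent = [16]                     # start from a single small hidden layer
parent_fit = fitness(parent)
for generation in range(10):      # tiny budget, illustration only
    child = mutate(parent)
    child_fit = fitness(child)
    if child_fit >= parent_fit:   # (1+1) selection: keep the better architecture
        parent, parent_fit = child, child_fit
    print(f"gen {generation}: layers={parent} cv-accuracy={parent_fit:.3f}")
```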
Scope and Topics
The aim of this special session is to provide a platform to present and deliberate on recent findings and future research directions concerning existing deep architecture models, new innovative architectures, various ways of integrating fuzzy systems and deep neural networks, hybrid approaches to deep architecture systems, evolvable systems, theoretical and empirical analysis, and real-world applications.
This special session will encourage researchers from academia and industry in the exciting and multi-disciplinary field of Deep Computational Intelligence. The topics for the proposed special session include, but are not limited to:
Organized by Biliana Alexandrova-Kabadjova
The development of modern society, technological innovation and, more recently, the financial crises have opened up a wide variety of new avenues for economic research, driven primarily by authorities' need for specific policy-oriented studies.
The economic analysis of Financial Market Infrastructures is one of these new policy-oriented fields. Financial Market Infrastructures (FMIs) are economic platforms built for the purpose of facilitating the clearing, settlement, and recording of monetary and other financial transactions. The prime elements that nowadays form part of the FMIs are payment systems (PS), central securities depositories (CSD), securities settlement systems (SSS), central counterparties (CCP), and trade repositories (TRs). The Principles for Financial Market Infrastructures (the Principles) are the pillars of operational rules aimed at guaranteeing the stability of financial markets and increasing the efficiency of these economic platforms.
In the aftermath of the financial crisis we have witnessed multifaceted regulatory efforts around the world. In order to guarantee stability, much attention has been paid to additional bank requirements (at the micro level) and the so-called macro-prudential approach. However, in order to form a proper systemic analysis of the financial industry, the level of financial market infrastructures (FMIs) deserves attention in its own right. FMIs were referred to by Dr. Martin Diehl as the backbone of the financial system (Diehl, 2016). FMIs are in charge of the clearing, settlement and recording of transactions. Studying the economics of FMIs differs from both the micro and the macro level: on the one side, these infrastructures are transactional systems with many participants having direct access to them and forming a rich network of interactions; on the other side, FMIs are single institutions that are connected through the overlapping set of participants having access to them and through the transfers of funds that flow from one FMI to another. FMIs are themselves complex systems whose operations cannot be modelled analytically; in order to be studied properly, researchers have to apply complex models. FMIs serve different financial institutions and are very often of systemic importance to the whole financial system, as they determine the efficiency, stability and overall reliability of the financial industry. This field is becoming one of the key channels for conducting macro-prudential policy. Nevertheless, very few specialists worldwide are doing research and defining the agenda for the future.
Against this background, and in a field that has proven to be inherently cross-country and inter-disciplinary, the aim of the special session is to promote research on applications of computational intelligence and studies aimed at sharing insights on best practices for analyzing FMIs.
Scope and Topics
In order to support stress testing, monitoring, early-warning indicators, and the testing of new policy and operational rules, topics of interest include, but are not limited to:
Organized by Raymond Chiong, Yukun Bao, Manuel Chica and Sergio Damas
Computational intelligence has a long history of applications to marketing and plays an important role in establishing the interdisciplinary pool of methodologies employed in marketing science research. For example, evolutionary algorithms, artificial neural networks, support vector machines and fuzzy logic have been used in demand forecasting, direct marketing and cross-selling, among others. Expert systems have been used for decision support in brand management, and data mining has become a core component of customer relationship management in marketing. Likewise, the use of computational intelligence in social science research allows a heightened understanding of the dynamics of complex systems. Agent-based modelling, using agents whose intelligence includes full-blown creativity thanks to their ability to learn and to adapt, is revealing information about such systems that has never before been available.
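Purely as an illustration, the toy agent-based sketch below shows the style of model referred to above: simulated consumers repeatedly choose between two hypothetical brands, weighing their own price sensitivity against the choices of a few randomly sampled peers. All agents, parameters and behaviours are invented assumptions.

```python
# Toy agent-based sketch (invented for illustration): consumers choose between
# two hypothetical brands under social influence and price sensitivity.
import random

random.seed(1)

N_AGENTS, STEPS, PEERS = 500, 50, 5
PRICE = {"A": 1.0, "B": 0.9}          # brand B is assumed slightly cheaper
SOCIAL_WEIGHT = 0.6                   # how strongly peers sway a consumer

choices = [random.choice("AB") for _ in range(N_AGENTS)]
price_sensitivity = [random.random() for _ in range(N_AGENTS)]

for step in range(STEPS):
    new_choices = list(choices)
    for i in range(N_AGENTS):
        peers = random.sample(range(N_AGENTS), PEERS)
        share_a = sum(choices[j] == "A" for j in peers) / PEERS
        # utility = social conformity term minus personal price term
        util_a = SOCIAL_WEIGHT * share_a - (1 - SOCIAL_WEIGHT) * price_sensitivity[i] * PRICE["A"]
        util_b = SOCIAL_WEIGHT * (1 - share_a) - (1 - SOCIAL_WEIGHT) * price_sensitivity[i] * PRICE["B"]
        new_choices[i] = "A" if util_a > util_b else "B"
    choices = new_choices

print("final share of brand A:", sum(c == "A" for c in choices) / N_AGENTS)
```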
The purpose of this special session is to bring together the computational intelligence community as well as researchers from marketing and social sciences to set up visions on how state-of-art computational intelligence techniques can be and are used for insightful marketing and social science analysis, and how marketing and social scientists can contribute in promoting new applications with computational intelligence.
Scope and Topics
We invite submissions of original, previously unpublished papers with topics on, but not limited to, the following:
Technical issues include (but are not limited to)
Organized by Tomohiro Yoshikawa and Yoichiro Maeda
This special session aims at discussing the basic principles and methods of designing intelligent interaction with bidirectional communication, based on effective collaboration and symbiosis between humans and artifacts, i.e., robots, agents, computers and so on.
We aim to encourage academic and industrial discussion of research on Human-Agent Interaction (HAI), Human-Robot Interaction (HRI), and Human-Computer Interaction (HCI) concerning symbiotic systems. Reflecting the fact that this area covers a wide range of topics, this session invites researchers from a variety of fields, including intelligent robotics, human-machine interfaces, Kansei engineering and so on.
Scope and Topics
Topics of interest include, but are not limited to theory and application of:
Organized by Asim Roy, Plamen Angelov, Marley Vellasco, Adel Alimi, G. Kumar Venayagamoorthy, Juyang Weng, Leonid Perlovsky and De-Shuang Huang
The aim of this special session is to promote new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep learning, nature-inspired and computational intelligence approaches), implementations on different computing platforms (e.g. neuromorphic, GPUs, clouds, clusters) and applications of big data to solve real-world problems (e.g. weather prediction, transportation, energy management).
Scope and Topics
Topics of interest include, but are not limited to theory and application of:
Organized by Simon Fong, Xin-she Yang, Thomas Hanne and Sabah Mohammed
Hybridized techniques embrace the advantages of more than any one technique alone. On the one hand, metaheuristic approaches are general strategies for guiding heuristic procedures, usually for improving the efficiency of optimization methods. Recently there has been strong momentum in metaheuristics research in computer science communities, especially those of evolutionary computing and swarm intelligence, which tap into the power of collective and bio-inspired collaborative behaviours for distributed search. Many contemporary algorithms and their applications to solving computationally intensive problems have emerged, ranging from swarm intelligence methods inspired by bee pollination to wolf-pack hunting.
While novel metaheuristics are being developed from time to time, hybrid versions of them combined with data mining techniques are not uncommon. Hybridization comes in two major directions: data mining approaches are combined within an optimization process, and vice versa. In the first case, data mining is the core function, with the objective of analyzing data and revealing hidden patterns, just as in its original form; the data mining function is wrapped by an iterative optimization process, driven by some metaheuristic, for the sake of stochastically finding the optimal data mining result out of many possible runs. In the second case, data mining methods are used as part of the heuristic search in the metaheuristic; search patterns and search directives are learnt by data mining the past trials during the optimization process. The heuristic search is thus better guided by incorporating the knowledge learnt from the heuristic trials, which enhances the optimization results at the end.
The main focus of this special session is to investigate new methods and desirable properties of the new hybrids resulting from combining metaheuristics and data mining, either metaheuristics wrapping data mining or data mining enhancing metaheuristic searches. Most metaheuristic strategies have already been applied to data mining tasks, but there are still open research lines to improve their usefulness.
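To picture the first hybridization direction in miniature, the sketch below wraps a k-nearest-neighbour classifier (the data mining step) inside a very simple mutation hill-climber standing in for a fuller metaheuristic, which searches for a good feature subset. It assumes scikit-learn is available; the data, operators and evaluation budget are arbitrary choices made only for this illustration.

```python
# Illustrative sketch: a metaheuristic (simple mutation hill-climber) wrapping
# a data mining step (k-NN classification) to select features. Assumes sklearn.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

random.seed(42)
X, y = make_classification(n_samples=300, n_features=25, n_informative=6,
                           random_state=42)
n_features = X.shape[1]

def score(mask):
    """Cross-validated accuracy of k-NN on the selected feature columns."""
    cols = [i for i, keep in enumerate(mask) if keep]
    if not cols:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, cols], y, cv=3).mean()

def mutate(mask):
    """Flip one randomly chosen feature in or out of the subset."""
    child = list(mask)
    i = random.randrange(n_features)
    child[i] = 1 - child[i]
    return child

best = [random.randint(0, 1) for _ in range(n_features)]
best_score = score(best)
for _ in range(60):                    # tiny evaluation budget, illustration only
    cand = mutate(best)
    cand_score = score(cand)
    if cand_score >= best_score:       # greedy acceptance of equal-or-better subsets
        best, best_score = cand, cand_score

print(f"selected {sum(best)} of {n_features} features, cv-accuracy {best_score:.3f}")
```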
Scope and Topics
This special session is intended to serve as a platform for exchanging the latest progress along these two types of hybridization of these two important computational techniques. Sharing of experiences with applications using hybrid metaheuristics and data mining is also encouraged, from both academia and industry, in the following theoretical and application areas (but not limited to):
Algorithms and metaheuristics
The IEEE WCCI 2016 will offer a number of tutorials aimed at researchers, students and practicing professionals. All tutorials will be held on 24 July 2016. Traditionally, tutorials attract a broad range of audiences, including professionals, researchers from academia, students, and practitioners, who wish to enhance their knowledge in the selected tutorial topic. Tutorials offer a unique opportunity to disseminate in-depth information on specific topics in computational intelligence.
IEEE WCCI 2016 Tutorials will be held at the Vancouver Convention Centre (West Building) on Sunday, 24 July 2016. They are free of charge for registered participants of the Congress. The tutorial schedule is given below:
As the tutorials are free, the number of seats for each tutorial is limited to room capacity. Seats will be allocated on a first-come, first-served basis. Please arrive early to secure a seat.
IJCNN 2016 Tutorials
FUZZ-IEEE 2016 Tutorials
IEEE CEC 2016 Tutorials
Organized by Leszek Rutkowski
In recent years, data stream mining has become a very challenging and widely studied issue in the computer science community. This topic is being developed in response to the substantial growth in the amount of data that needs to be processed and analyzed in many fields of human activity. Among many others, this includes, for example, traffic control, network monitoring, fraud detection in bank transactions, and issues associated with image recognition such as robotic vision and object tracking.
Data streams are potentially infinite sequences of data elements which often arrive at the system at very high rates. Therefore, the standard data mining algorithms used for static data are not directly applicable in this field: either they require significant modifications beforehand, or totally new dedicated algorithms have to be developed. In this tutorial we will review existing data stream mining algorithms and present new results recently published in [1-4]. The tutorial will consist of 10 parts:
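As a side illustration of why static algorithms do not transfer directly to streams (and not one of the ten parts, nor an algorithm from [1-4]), the sketch below processes a potentially unbounded synthetic stream one element at a time with constant memory, using Welford's incremental mean/variance together with a small fixed-size sliding window; the stream and window size are arbitrary assumptions.

```python
# Minimal illustration: constant-memory processing of a potentially unbounded
# stream via Welford's incremental mean/variance plus a bounded sliding window.
from collections import deque
import random

random.seed(0)

def synthetic_stream(n=10_000):
    """A toy stream whose mean shifts halfway through (a simple concept drift)."""
    for i in range(n):
        yield random.gauss(0.0 if i < n // 2 else 3.0, 1.0)

count, mean, m2 = 0, 0.0, 0.0          # Welford running statistics (whole stream)
window = deque(maxlen=200)             # recent elements only (bounded memory)

for x in synthetic_stream():
    count += 1
    delta = x - mean
    mean += delta / count
    m2 += delta * (x - mean)           # uses the *updated* mean, as Welford requires
    window.append(x)

variance = m2 / (count - 1)
recent_mean = sum(window) / len(window)
print(f"global mean {mean:.2f}, global variance {variance:.2f}, "
      f"recent-window mean {recent_mean:.2f}")
```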
Biography
Leszek Rutkowski received the MSc and PhD degrees from the Technical University of Wroclaw, Poland, in 1977 and 1980, respectively. Since 1980, he has been with the Technical University of Czestochowa, where he is currently a professor and director of the Institute of Computational Intelligence. From 1987 to 1990, he held a visiting position at the School of Electrical and Computer Engineering, Oklahoma State University. His research interests include data stream mining, big data analysis, neural networks, fuzzy systems, computational intelligence, pattern classification, and expert systems. He has published more than 200 technical papers, including 22 in various series of IEEE Transactions. He is the author of the following books: Computational Intelligence (Springer, 2008), New Soft Computing Techniques for System Modeling, Pattern Classification and Image Processing (Springer, 2004), Flexible Neuro-Fuzzy Systems (Kluwer Academic, 2004), Methods and Techniques of Artificial Intelligence (2005, in Polish), Adaptive Filters and Adaptive Signal Processing (1994, in Polish), and coauthor of two others (1997 and 2000, in Polish): Neural Networks, Genetic Algorithms and Fuzzy Systems and Neural Networks for Image Compression. He is the president and founder of the Polish Neural Networks Society. He was an associate editor of the IEEE Transactions on Neural Networks (1998-2005) and the IEEE Systems Journal (2007-2010). He is the editor-in-chief of the Journal of Artificial Intelligence and Soft Computing Research, and he is on the editorial boards of the IEEE Transactions on Cybernetics, the International Journal of Neural Systems, the International Journal of Applied Mathematics and Computer Science and the International Journal of Biometrics. He is a recipient of the IEEE Transactions on Neural Networks 2005 Outstanding Paper Award. He served in the IEEE Computational Intelligence Society as the chair of the Distinguished Lecturer Program (2008-2009) and the chair of the Standards Committee (2006-2007). He is the founding chair of the Polish chapter of the IEEE Computational Intelligence Society, which won the 2008 Outstanding Chapter Award. In 2004, he was elected a member of the Polish Academy of Sciences. In 2004, he was awarded the IEEE Fellow membership grade for contributions to neurocomputing and flexible fuzzy systems.
Organized by Nathan Scott and Nikola Kasabov
This tutorial introduces spiking neural networks (SNN), their methods, implementations and applications. SNNs use principles of information processing that are also characteristic of the brain. Information is represented in the form of many sequences of spatio-temporal potentials (spikes) that are transferred between many neurons through connections. When applied to data modelling, SNNs have the potential for compact representation of space and time, fast information processing, time-based and frequency-based information representation, efficient learning and generalisation on complex data, and predictive spiking activity that can trigger a necessary response in advance. SNNs could revolutionise computing in general, and that is why SNNs have been chosen as the main information processing paradigm for the development of new neuromorphic computing systems in the EU Human Brain Project, the USA BRAIN project, and others. The tutorial will include materials and demonstrations organized in three parts:
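As a small aside (not taken from the tutorial material), the sketch below simulates a single leaky integrate-and-fire neuron, the simplest caricature of the spike-based representation described above; all parameters are arbitrary assumptions.

```python
# Minimal sketch of spike-based information processing: a single leaky
# integrate-and-fire (LIF) neuron driven by a step input current, emitting a
# spike whenever its membrane potential crosses a threshold.
tau_m = 20.0                                  # membrane time constant (ms)
v_rest, v_reset, v_th = -65.0, -70.0, -50.0   # resting, reset and threshold potentials (mV)
dt, t_max = 0.1, 200.0                        # time step and duration (ms)

v = v_rest
spike_times = []
t = 0.0
while t < t_max:
    i_input = 20.0 if 50.0 <= t <= 150.0 else 0.0   # step input current (a.u.)
    # leaky integration: dv/dt = (-(v - v_rest) + i_input) / tau_m
    v += dt * (-(v - v_rest) + i_input) / tau_m
    if v >= v_th:              # threshold crossing -> emit a spike, then reset
        spike_times.append(round(t, 1))
        v = v_reset
    t += dt

print(f"{len(spike_times)} spikes; first few at (ms): {spike_times[:5]}")
```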
Biography
Nikola Kasabov
Professor Nikola Kasabov is a Fellow of the IEEE, Fellow of the Royal Society of New Zealand and DVF of the Royal Academy of Engineering, UK. He is the Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland. He holds a Chair of Knowledge Engineering at the School of Computing and Mathematical Sciences at Auckland University of Technology. Kasabov is a Past President and Governor Board member of the International Neural Network Society (INNS) and also of the Asia Pacific Neural Network Assembly (APNNA). He is a member of several technical committees of the IEEE Computational Intelligence Society and a Distinguished Lecturer of the IEEE CIS (2012-2014). He is a Co-Editor-in-Chief of the Springer journal Evolving Systems and has served as Associate Editor of Neural Networks, IEEE TrNN, IEEE TrFS, Information Science, J. Theoretical and Computational Nanosciences, Applied Soft Computing and other journals. Kasabov holds MSc and PhD degrees from the TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics and neuroinformatics. He has published more than 600 publications that include 15 books, 180 journal papers, 80 book chapters, 28 patents and numerous conference papers. He has extensive academic experience at various academic and research organisations in Europe and Asia, including: TU Sofia, University of Essex, University of Otago, Advisor-Professor at the Shanghai Jiao Tong University, and Guest Professor at ETH/University of Zurich. Prof. Kasabov has received the APNNA ‘Outstanding Achievements Award’, the INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’, the EU Marie Curie Fellowship, the Bayer Science Innovation Award, the APNNA Excellent Service Award, the RSNZ Science and Technology Medal, and others. He has supervised to completion 38 PhD students.
Nathan Scott
Nathan Scott is a Postdoctoral Fellow at KEDRI, researching the theory and practice of SNN and neuromorphic systems. He holds a PhD in Computer Science, a Bachelor of Computer and Information Sciences (First Class Honours), a BCIS (Software Development) and a BBus degree from Auckland University of Technology. Nathan is an AUT Vice Chancellor's Scholar, recipient of Top Graduate awards, the Dean's highest achievement award and other study awards. He is a member of the IEEE CIS and Signal Processing Societies, and a member of the IEEE CIS Neural Networks Task Force on Education. He has given a number of invited talks internationally, including tutorials at IJCNN and ICONIP conferences and IEEE Summer Schools, and has chaired a number of conference Special Sessions on SNN. He currently teaches undergraduate courses in computer graphics and embedded computing.
Organized by Seiichi Ozawa
The increasing maliciousness and diversity of cyber-attacks has been one of the most pressing issues in recent years. There are various kinds of cyber-threats, such as malware infection, DDoS attacks, probing to find security vulnerabilities, phishing, and spam mails that lure users to malicious web sites, which aim to steal money or important information and to stop or disturb public services. All these attacks are conducted via communication over the Internet, in which unstructured information is delivered or broadcast in the form of packet data. Therefore, to utilize machine learning for the detection, classification, and prediction of cyber-attacks, we need to consider how unstructured data streams should be formulated as structured data that fit machine learning schemes. Such unstructured data often have no class labels, no information on useful features, and no training set available in advance, which poses big hurdles to applying machine learning methods. In many cases, a solution to the above issues is problem-dependent. However, there might be some rules of thumb for learning from unstructured data streams efficiently and effectively. I hope to share my experiences on the above topics with the audience.
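To give a flavour of the step described above, the small sketch below parses a few hypothetical, invented log lines and maps them to fixed-length numeric feature vectors that a learning algorithm could consume. The log format and chosen features are assumptions, not those used in the tutorial.

```python
# Tiny illustration: turning raw, hypothetical log lines into fixed-length
# numeric feature vectors. The format and features are invented for this sketch.
import math

raw_logs = [
    "2016-07-24 10:00:01 src=10.0.0.5 dst=203.0.113.9 port=80 bytes=532",
    "2016-07-24 10:00:02 src=10.0.0.5 dst=203.0.113.9 port=443 bytes=60",
    "2016-07-24 10:00:02 src=10.0.0.7 dst=198.51.100.3 port=22 bytes=90000",
]

def parse(line):
    """Parse 'key=value' tokens from one log line into a dict."""
    fields = dict(tok.split("=") for tok in line.split() if "=" in tok)
    fields["bytes"] = int(fields["bytes"])
    fields["port"] = int(fields["port"])
    return fields

def featurize(rec):
    """Map one parsed record to a fixed-length numeric feature vector."""
    return [
        math.log1p(rec["bytes"]),                 # heavy-tailed sizes -> log scale
        1.0 if rec["port"] in (80, 443) else 0.0, # common web ports flag
        1.0 if rec["port"] == 22 else 0.0,        # SSH flag
        float(rec["src"].startswith("10.")),      # internal source flag
    ]

X = [featurize(parse(line)) for line in raw_logs]
for row in X:
    print([round(v, 2) for v in row])
```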
This tutorial includes the following topics in cybersecurity:
Biography
Seiichi Ozawa received the B.E. and M.E. degrees in instrumentation engineering from Kobe University in 1987 and 1989, respectively. In 1998, he received his Ph.D. degree in computer science from Kobe University. He is currently a professor with the Department of Electrical and Electronic Engineering, Graduate School of Engineering, Kobe University, Kobe, Japan. From March 2005 to February 2006, he was a visiting researcher at Arizona State University. His current research interests are machine learning, incremental learning, online feature extraction, pattern recognition, and cyber security. He has published more than 124 journal and refereed conference papers, and 8 book chapters/monographs. He is currently an associate editor of the IEEE Transactions on Neural Networks and Learning Systems, Evolving Systems, and Pattern Analysis and Applications. He is also serving as a vice-president of the Japan Neural Network Society (JNNS) and as a board member of the Asia Pacific Neural Network Assembly (APNNA) and the Institute of Systems, Control and Information Engineers (ISCIE). He is a member of the Neural Networks Technical Committee of the IEEE Computational Intelligence Society. He is currently working as a General Co-Chair of ICONIP2016 in Kyoto, Japan, served as a special session chair of ICONIP2013 and WCCI2014, and has also served on the technical committees of many international conferences.
Organized by Cesare Alippi and Manuel Roveri
Many real-world machine learning applications assume the stationarity hypothesis for the process generating the data. This assumption guarantees that the model
learnt during the initial training phase remains valid over time and that its performance is in line with our expectations. Unfortunately, this assumption does
not truly hold in the real world, in many cases representing only a simplistic approximation.
Current research in machine learning aims at removing/weakening the stationarity assumption so that time variance is detected as soon as possible and suitable
actions are activated afterwards. In this direction, the literature addressing learning in nonstationary environments classifies existing approaches as passive
or active, depending on the learning mechanism adopted to deal with the process evolution. Passive approaches rely on a continuous adaptation of the application
without explicitly knowing whether a change has occurred or not, while, in active approaches, triggering mechanisms, e.g., Change Detection Tests (CDTs) or
Change Point Methods (CPMs), are used to detect a change in the process generating the data. Once the change has been detected, the application might
require (self-)adaptation to track the system evolution.
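To make the active scheme concrete, here is a toy sketch of a one-sided CUSUM-style change detection test for an upward shift of the mean of a data stream. It is only an illustrative stand-in for the CDTs and CPMs covered in the tutorial; the reference mean, slack and threshold values are chosen arbitrarily.

    def cusum_alarm(stream, mu0=0.0, slack=0.5, threshold=5.0):
        """Return the index at which a CUSUM-style test flags an upward mean shift,
        or None if no change is detected. Parameter values are illustrative."""
        g = 0.0
        for t, x in enumerate(stream):
            g = max(0.0, g + (x - mu0 - slack))  # accumulate evidence of a shift
            if g > threshold:
                return t                          # change detected: trigger adaptation
        return None

    # Stationary at first, then the mean jumps from 0 to about 2.
    data = [0.1, -0.3, 0.2, 0.0, 2.1, 1.8, 2.3, 2.0, 1.9]
    print(cusum_alarm(data))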
The tutorial will introduce and contrast passive and active approaches by providing those details the scholar and the practitioner need to be able to design
machine learning applications working in nonstationary environments.
Biography
Cesare Alippi
Cesare Alippi received the degree in electronic engineering cum laude in 1990 and the PhD in 1995 from Politecnico di Milano, Italy. Currently, he is a Full
Professor of information processing systems with the Politecnico di Milano. He has been a visiting researcher at UCL (UK), MIT (USA), ESPCI (F), CASIA (RC), USI (CH).
Alippi is an IEEE Fellow, Vice-President education of the IEEE Computational Intelligence Society (CIS), Board of Governors member of the International Neural Networks
Society, Associate editor (AE) of the IEEE Computational Intelligence Magazine, past AE of the IEEE-Tran. Neural Networks (2005-2012), IEEE-Trans Instrumentation
and Measurements (2003-09) and member and chair of other IEEE committees. He was awarded the 2016 IEEE Transactions on Neural Networks and Learning
Systems Outstanding Paper Award, the 2013 IBM Faculty Award, and the 2004 IEEE Instrumentation and Measurement Society Young Engineer Award; in 2011 he was
made a Knight of the Order of Merit of the Italian Republic. His current research activity addresses adaptation and learning in non-stationary environments and intelligence for embedded systems.
Manuel Roveri
Manuel Roveri received the Dr.Eng. degree in Computer Science Engineering from the Politecnico di Milano (Milano, Italy) in June 2003, the MS in Computer Science from the University of Illinois at Chicago (Chicago, Illinois, U.S.A.) in December 2003, and the Ph.D. degree in Computer Engineering from the Politecnico di Milano (Milano, Italy) in May 2007. Currently, he is an associate professor at the Department of Electronics and Information of the Politecnico di Milano. He has been a visiting researcher at Imperial College London (UK). Manuel Roveri is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems and has served as chair and member of many IEEE subcommittees. He received the 2016 IEEE Transactions on Neural Networks and Learning Systems Outstanding Paper Award. He was a co-organizer of the IEEE Symposium on Intelligent Embedded Systems in 2014 and an organizer and co-organizer of workshops and special sessions at IEEE-sponsored conferences. His current research activity addresses adaptation and learning in non-stationary environments, intelligence for embedded systems, and cognitive fault diagnosis.
Organized by Leonid I. Perlovsky
The presentation focuses on mathematical models of the fundamental principles of the mind-brain neural mechanisms and practical applications
in several fields. Physics of the mind is an extension of neural networks towards more realistic modeling of the mind from perception to the entire
mental hierarchy including higher cognition and emotion. Big data and autonomous learning algorithms are discussed for cybersecurity, gene-phenotype
associations, medical applications to disease diagnostics, financial predictions, data mining in distributed data bases, learning of patterns under
noise, interaction of language and cognition in mental hierarchy. Mathematical models of the mind-brain are discussed for mechanisms of concepts,
emotions, instincts, behavior, language, cognition, intuitions, conscious and unconscious, abilities for symbols, functions of the beautiful and musical
emotions in cognition and evolution. This research won National and International Awards.
A mathematical and cognitive breakthrough, dynamic logic is described. It models cognitive processes “from vague and unconscious to crisp and conscious,”
from vague representations, plans, thoughts to crisp ones. It resulted in more than 100 times improvements in several engineering applications; brain
imaging experiments at Harvard Medical School, and several labs around the world proved it to be a valid model for the brain-mind processes. New cognitive
and mathematical principles are discussed, language-cognition interaction, function of music in cognition, and evolution of cultures. How does language
interact with cognition? Do we think using language, or is language just a label for completed thoughts? Why has the ability for music evolved from animal cries
to Bach and Lady Gaga? The presentation briefly reviews past mathematical difficulties of computational intelligence and the new mathematical techniques of
dynamic logic and neural networks implementing it, which overcome past limitations.
The presentation discusses the cognitive functions of emotions. Why does human cognition need emotions of the beautiful, music, and the sublime? Dynamic logic is related to
the knowledge instinct and the language instinct; why are they different? How do languages affect the evolution of cultures? Language networks are scale-free and small-world;
what does this tell us about cultural values? What are the biases of English, Spanish, French, German, Arabic, and Chinese, and what is the role of language in cultural
differences?
Relations between cognition, language, and music are discussed. Mathematical models of the mind and cultures bear on the contemporary world, and may be used to
improve mutual understanding among peoples around the globe and reduce tensions among cultures.
Biography
Dr. Leonid Perlovsky is Professor of Psychology at Northeastern University, CEO of LPIT, a past Visiting Scholar at the Harvard University School of Engineering and Applied Science and Harvard University Medical School, and Principal Research Physicist and Technical Advisor at the Air Force Research Laboratory (AFRL). He led research projects on neural networks, modeling the mind, and cognitive algorithms for the integration of sensor data with knowledge, multi-sensor systems, recognition, fusion, languages, music cognition, and cultures. As Chief Scientist at Nichols Research, a $0.5B high-tech organization, he led the corporate research in intelligent systems and neural networks. He served as professor at Novosibirsk University and New York University, and as a principal in commercial startups developing tools for biotechnology, text understanding, and financial predictions. His company predicted the market crash following 9/11 a week before the event. He is invited as a keynote and plenary speaker and tutorial lecturer worldwide, including at the most prestigious venues, such as the Nobel Forum, and has published more than 500 papers, 17 book chapters, and 5 books, including “Neural Networks and Intellect” (Oxford University Press, 2001, currently in its 3rd printing) and “Cognitive Emotional Algorithms” (Springer, 2011). Dr. Perlovsky participates in organizing conferences on neural networks and CI and is a past chair of the IEEE Boston CI Chapter; he serves on the editorial boards of ten journals, including as Editor-in-Chief of “Physics of Life Reviews” (IF=9.5, Thomson Reuters rank #4 in the world), serves on the INNS Board of Governors, and is a past chair of the INNS Award Committee. He received national and international awards including the Gabor Award, the top engineering award from the INNS, and the John McLucas Award, the highest US Air Force award for basic research.
Organized by Zeng-Guang Hou
Stroke, traumatic brain injury (TBI), and spinal cord injury (SCI) are among the most important causes of nervous system damage, and thus lead to physical
disabilities. Because of the long process of nervous system recovery, patients would suffer permanent disability without effective treatment and
rehabilitation. At present, physical therapy, occupational therapy and exercise therapy are the most popular clinical treatments for rehabilitation, and they
have been proven helpful to the recovery of patients’ nervous system and limb functions. However, the vast majority of rehabilitation hospitals still carry
out the above treatments manually or using simple rehabilitation medical devices. For traditional exercise therapy, most hospitals still use simple cycling
devices for passive training, which is very limited because of the single training mode and fixed training trajectory of such machines. Since the training process
for patients with neurological damage is repetitive, robotics is expected to improve the current status of rehabilitation, accelerate the rehabilitation
process for patients, and reduce therapists’ labor intensity. We will mainly address the system design of a reclining-type rehabilitation robot for lower
limbs, as well as the passive training, active training and assistive training control methods studied for the needs of neurological rehabilitation and the
motor function of the lower limbs of SCI or stroke patients.
Biography
Dr. Hou received the B.E. and M.E. degrees in electrical engineering from Yanshan University (formerly Northeast Heavy Machinery Institute), Qinhuangdao,
China, in 1991 and 1993, respectively, and the Ph.D. degree in electrical engineering from Beijing Institute of Technology, China, in 1997. From July 1999
to May 2004, he was an Associate Professor with the State Key Laboratory of Management and Control of Complex Systems, Institute of Automation, Chinese
Academy of Sciences, where he has been a full Professor since June 2004, and the Deputy Director of the Laboratory since 2006. From September 2003 to October
2004, he was a Visiting Professor at the Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, SK, Canada.
His current research interests include neural networks, robotics for rehabilitation and minimally invasive surgery, and intelligent control systems. He has
published over 100 papers in refereed journals and conference proceedings. He has over 20 patents. He is the recipient of the YangJiaChi Award by the Chinese
Automation Society in 2010, Distinguished Graduate Student Supervisor Award by Chinese Academy of Sciences in 2010, and Excellence Youth Funds by the Natural
Science Foundation of China in 2012.
Dr. Hou currently serves as an Associate Editor of IEEE Transactions on Cybernetics, Neural Networks, Acta Automatica Sinica, and Control Theory and Applications.
He served IEEE WCCI as the Publicity Chair in Vancouver in 2006, the Publication Chair in Hong Kong in 2008, and the Local Arrangement Chair in Beijing in 2014.
Organized by Péter Érdi
The network of patents connected by citations is an evolving graph, which represents the innovation process. A patent citing another implies that the cited patent reflects a piece of previously existing knowledge that the citing patent builds upon. It will be explained why and how to use specific algorithms to extract relevant information about the patent citation network. The tutorial consists of three parts:
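As a small, hypothetical illustration of the kind of citation-network analysis involved (using the networkx library; the citation edges below are invented), one can load the citation graph and rank patents by how often they are cited or by PageRank:

    import networkx as nx

    # Directed edge (a, b) means patent a cites patent b.
    citations = [("P3", "P1"), ("P4", "P1"), ("P4", "P2"), ("P5", "P3"), ("P5", "P4")]
    G = nx.DiGraph(citations)

    in_deg = dict(G.in_degree())   # how often each patent is cited
    rank = nx.pagerank(G)          # importance within the evolving citation graph
    print(sorted(in_deg.items(), key=lambda kv: -kv[1]))
    print(max(rank, key=rank.get))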
Biography
Péter Érdi serves as the Henry R. Luce Professor of Complex Systems Studies at Kalamazoo College. He also holds a position as a research professor at the Wigner Research Centre for Physics, Hungarian Academy of Sciences, in Budapest. In addition, he is the founding director of the BSCS (Budapest Semester in Cognitive Science), a study abroad program. He is a member of the Board of Governors of the International Neural Network Society and of some other committees, and now also serves as the Editor-in-Chief of Cognitive Systems Research. His main scientific field is computational neuroscience, but he is also active in the field of computational social science and other areas of complex systems research.
Organized by Roberto Tagliaferri
Multi-view learning is concerned with the problem of machine learning from data represented by multiple distinct feature sets. The recent emergence of this learning mechanism is largely motivated by the property of data from real applications where examples are described by different feature sets or different views, for example: Bioinformatics (microarray gene expression, RNASeq, PPI, gene ontology, etc.); Neuroinformatics (fMRI, DTI); Internet of Things; Web Mining.
In 2013, Sun proposed a multi-view learning taxonomy with several main issues: dimensionality reduction; semi-supervised learning; supervised learning; clustering; active learning; ensemble learning; and transfer learning. Orthogonally to this taxonomy, one can analyze the multi-view learning paradigm with respect to the stage at which data are integrated: early integration, intermediate integration, or late integration, where in the latter each view is analyzed on its own and the results are then fused together.
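A minimal sketch of late integration, assuming scikit-learn is available and using randomly generated views in place of real multi-view data: one classifier is trained per view and their predicted probabilities are averaged.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)
    y = rng.randint(0, 2, 100)
    views = [rng.randn(100, 5) + y[:, None],        # view 1 (e.g. expression-like features)
             rng.randn(100, 8) + 0.5 * y[:, None]]  # view 2 (e.g. network-like features)

    # Late integration: fit one model per view, then fuse the outputs.
    models = [LogisticRegression().fit(X, y) for X in views]
    proba = np.mean([m.predict_proba(X)[:, 1] for m, X in zip(models, views)], axis=0)
    fused_pred = (proba > 0.5).astype(int)
    print("training accuracy of the fused prediction:", (fused_pred == y).mean())

Early and intermediate integration differ only in where the fusion happens: concatenating the raw feature sets before learning, or combining view-specific representations (e.g. kernels) inside a single model.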
The tutorial is divided into two parts:
Biography
From 1986 to 2000, Roberto Tagliaferri was a researcher in Computer Science and Cybernetics; from March 2000 to October 2006 he was an Associate Professor of Computer Science; and since 1 November 2006 he has been a full Professor at the Faculty of Sciences, University of Salerno.
He has co-organized Italian and international workshops on Neural Networks and Bioinformatics since 1988. He was co-editor of the proceedings of WIRN from 1995 to 2005. He has been co-editor of special issues of international journals: Neural Networks (2003), International Journal of Approximate Reasoning (2008), Artificial Intelligence in Medicine (2009), International Journal of Knowledge Engineering and Soft Data Paradigms (2010), and BMC Bioinformatics (2015). He is Associate Editor of the IEEE Transactions on Cybernetics and of Source Code for Biology and Medicine (BioMed Central). He is a senior member of the IEEE Computational Intelligence and Systems, Man and Cybernetics societies and of INNS. He is chair of the Italian Chapter of the IEEE CIS. He is author of more than 180 scientific publications, including more than 60 papers in international journals, co-editor of 17 proceedings, and has an H-index of 23 and an i10-index of 40 on Google Scholar.
Organized by Robert Kozma
This self-contained tutorial reviews the intensively developing field of network science and graph theory for researchers working in the field of neural networks and computational intelligence, who are interested in gaining insight into recent progress in the field. We introduce theoretical foundations and computational modeling tools to interpret large-scale brain imaging data. Neural and cognitive networks in the brain are viewed as large-scale graphs, which evolve in time in response to the dynamically changing environmental conditions. Results of the interpretation of brain imaging data are used to design more intelligent computational devices and autonomous robots.
Graph theoretical approaches have been extremely useful in the past 20 years to describe structural and functional properties of large-scale networks, including the world-wide-web, social networks, ecological networks, biological systems, etc. Our focus here is on neural systems, in particular on brain networks. Functional connections between cortical areas are described using various brain imaging tools, including fMRI, MEG, EEG, and ECOG. These results show breakthroughs in our understanding of brain networks and dynamics, and produce details of the connectome.
Neural correlates of higher cognitive functions are described as the result of brain imaging experiments, which reveal mechanisms of intelligent behavior and the occurrence of moments of deep insight, the "aha" moment. Graph theoretical tools are employed to interpret experimental findings with brain imaging, and to contribute to better understanding of normal and abnormal brain conditions. Specific application areas include intelligent brain-computer interfaces with biofeedback modalities to relieve stress conditions and to enhance relaxation, as well as help the disabled and the elderly.
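As a brief illustration of the graph-theoretical quantities involved, the sketch below uses networkx on a synthetic Watts-Strogatz graph (a stand-in for a functional connectivity graph, not real EEG/ECoG data) to compute the clustering coefficient and characteristic path length that underlie small-world analysis:

    import networkx as nx

    # Synthetic stand-in for a functional connectivity graph.
    G = nx.connected_watts_strogatz_graph(n=200, k=8, p=0.1, seed=1)

    C = nx.average_clustering(G)              # local clustering (segregation)
    L = nx.average_shortest_path_length(G)    # characteristic path length (integration)
    print(f"clustering coefficient = {C:.3f}, path length = {L:.3f}")

Comparing these quantities against those of random and lattice graphs of the same size is the usual way to quantify the small-world character of a brain network.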
This tutorial addresses the following topics:
Biography
Robert Kozma (Fellow of IEEE, Fellow of INNS) is Professor of Mathematical Sciences and Director of the Center of Large-Scale Integration and Optimization Networks (CLION), the University of Memphis, TN, USA. Research in his CLION is focused on developing advanced optimization techniques based on biologically motivated and cognitive principles of large-scale networks. He has published 8 books, 300+ papers, has 3 patent disclosures. His research has been supported by NSF, NASA, JPL, AFRL, DARPA, FedEx, and by other agencies. He is President-Elect (2016) of the International Neural Network Society, and serves on the Board of Governors of IEEE Systems, Man, and Cybernetics (SMC, 2016-2018). Previously, he has served on the AdCom of the IEEE Computational Intelligence Society (2009-2012) and the Board of Governors of the International Neural Network Society (2007-2012). He has been General Chair of IJCNN2009, Atlanta, USA. He is Associate Editor of Neural Networks, Neurocomputing, Cognitive Systems Research, and Cognitive Neurodynamics. Dr. Kozma is the recipient of the INNS “Gabor Award” (2011); the “Alumni Association Distinguished Research Achievement Award” (2010), and he has been a “National Research Council (NRC) Senior Fellow” (2006-2008).
Dr. Kozma holds a Ph.D. in Physics (Delft, The Netherlands, 1992), two M.Sc. degrees (Mathematics, Budapest, Hungary, 1988; Power Engineering, Moscow, Russia, 1982). He worked as Research Fellow at the Hungarian Academy of Sciences, Budapest, Hungary. He has been on the faculty of Tohoku University (Japan), Otago University (New Zealand), and had a joint appointment with the Division of Neurobiology and the EECS at UC Berkeley, and has held visiting positions at NASA/JPL, Sarnoff Co., Princeton, NJ; Lawrence Berkeley Laboratory (LBL); and AFRL, Dayton, OH.
Organized by Isaac Triguero and Francisco Herrera
Abstract
In the era of information technology, the problem of managing big data applications is becoming the main focus of attention in a wide
variety of disciplines such as science, business and industry, because of the enormous increase in data generation and storage that has taken
place in recent years. Analyzing and extracting knowledge from such volumes of data becomes a very interesting and challenging task for most of
the standard computational intelligence techniques, which may not be properly adapted to the new space and time requirements. Thus, we must consider
new paradigms to develop scalable algorithms.
The MapReduce paradigm, introduced by Google, allows us to carry out the processing of large amounts of information. Its open-source implementation,
named Hadoop, has led to a popular and widely used platform. Recently, new frameworks such as Apache Spark are emerging. Different machine
learning libraries have been developed for these frameworks, such as Mahout (Hadoop) and MLlib (Spark).
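For readers new to these frameworks, here is the classic word-count sketch in the MapReduce style, written against the PySpark RDD API (assuming a local Spark installation; the input file path is hypothetical):

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "wordcount-sketch")
    counts = (sc.textFile("data/sample.txt")            # hypothetical input file
                .flatMap(lambda line: line.split())     # map: emit words
                .map(lambda word: (word, 1))            # map: (key, value) pairs
                .reduceByKey(lambda a, b: a + b))       # reduce: sum counts per key
    print(counts.take(10))
    sc.stop()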
In this tutorial we will provide a gentle introduction to the problem of big data, including a formal definition and the issues that it brings to
society today, as well as a presentation of recent technologies (the Hadoop ecosystem, Spark). Then, we will dive into the field of big data analytics,
explaining the challenges it poses to computational intelligence techniques and introducing machine learning libraries such as Mahout and MLlib. Afterwards,
we will go across two of the main topics of WCCI 2016, fuzzy modeling and evolutionary models, in the big data context. Some big data case studies will
be presented for evolutionary feature selection/weighting and fuzzy rule learning, including the associated software.
Table of contents:
Biography
Isaac Triguero
Isaac Triguero received the M.Sc. and Ph.D. degree in Computer Science from the University of Granada, Granada, Spain, in 2009 and 2014, respectively. He is currently post-doctoral researcher at the Inflammation Research Center of the Ghent University, Ghent, Belgium. His research interests include data mining, data reduction, biometrics, evolutionary algorithms, semi-supervised learning, bioinformatics and big data learning.
Francisco Herrera
Francisco Herrera is a Professor in the Department of Computer Science and Artificial
Intelligence at the University of Granada, Spain. He has been the supervisor of 38 Ph.D. students. He has published more than 300 journal papers
(H-index 101) that have received more than 36000 citations (Google Scholar). He is co-author of the books "Genetic Fuzzy Systems" (World Scientific,
2001) and "Data Preprocessing in Data Mining" (Springer, 2015).
He currently acts as Editor-in-Chief of the international journals "Information Fusion" (Elsevier) and "Progress in Artificial Intelligence" (Springer).
He acts as an editorial board member of a dozen journals, among others: International Journal of Computational Intelligence Systems, IEEE Transactions
on Fuzzy Systems, IEEE Transactions on Cybernetics, Information Sciences, Knowledge and Information Systems, Fuzzy Sets and Systems, Applied Intelligence,
Knowledge-Based Systems, Memetic Computing, and Swarm and Evolutionary Computation.
He is a Fellow of the European Coordinating Committee for Artificial Intelligence and the International Fuzzy Systems Association. He has been given
many awards and honors for his personal work or for his publications in journals and conferences. His areas of interest include, among others, data
science, data preprocessing, cloud computing and big data.
Organized by Hamid Tizhoosh
Abstract
In this tutorial we will talk about the state of the art of fuzzy algorithms in machine learning. In the first part, we will review fuzzy algorithms as they
are applied to typical machine-learning tasks such as search, classification, approximation and learning. In the second part, the relationship between fuzzy methods
and other machine-learning approaches is reviewed, with hybrid schemes in the foreground. In both parts, relevant literature will be reviewed. Matlab examples
will be executed to display the effect of major methods for relevant applications such as data mining, signal processing, image analysis, and big data. Links to
online resources will be included in the material, which also contains the source codes and the presentation slides.
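For readers who want to experiment before the tutorial, here is a compact NumPy sketch of fuzzy c-means clustering, one of the standard fuzzy algorithms in machine learning. This is a generic textbook formulation, not the tutorial's Matlab material.

    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
        """Textbook fuzzy c-means: returns cluster centers and membership matrix U."""
        rng = np.random.RandomState(seed)
        n = X.shape[0]
        U = rng.rand(n, c)
        U /= U.sum(axis=1, keepdims=True)          # memberships of each point sum to 1
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
            U = 1.0 / (d ** (2.0 / (m - 1.0)))     # inverse-distance memberships
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
    centers, U = fuzzy_c_means(X, c=2)
    print(centers)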
Structure of the tutorial:
Biography
Dr. Hamid Tizhoosh received the MSc degree in electrical engineering with a major in computer science from the University of Technology, Aachen, Germany,
in 1995. From 1993 to 1996, he worked at Management of Intelligent Technologies Ltd., Aachen, Germany, in the field of industrial image processing.
Dr. Tizhoosh received his PhD degree from the University of Magdeburg, Germany, in 2000, on the subject of fuzzy processing of medical images. Dr. Tizhoosh
was active as a scientist in the engineering department of IPS (Image Processing Systems Inc., now Photon Dynamics), Markham, Canada, until 2001. For six months,
he visited the Knowledge/Intelligence Systems Laboratory, University of Toronto, Canada.
Since September 2001, Dr. Tizhoosh has been a faculty member at the Department of Systems Design Engineering, University of Waterloo, Canada. At the same time, he has
been the Chief Technology Officer and Chief Executive Officer of Segasist Technologies, a software company (Toronto, Canada) developing innovative software for medical
image analysis. His research encompasses machine learning, fuzzy logic and computer vision. Dr. Tizhoosh has extensive experience in medical imaging including
portal (megavoltage) imaging, x-rays, MRI and ultrasound. He has been a member of the European Union projects INFOCUS and ARROW for radiation therapy, to improve
the integration of online images within the treatment planning of cancer patients. Dr. Tizhoosh has published extensively on fuzzy techniques in image processing.
He is the author of two books, 14 book chapters, and more than 100 journal/conference papers.
Organized by Chang-Shing Lee, Giovanni Acampora and Yuandong Tian
Abstract
Many real-world applications with a high level of uncertainty have demonstrated the good performance of type-2 fuzzy sets (T2 FSs). In this tutorial, we will present three real-world applications, including game-playing, dietary assessment, and IRT-based e-learning, based on type-2 fuzzy ontologies and fuzzy markup language. Below are their brief descriptions:
Biography
Chang-Shing Lee (SM’09) received the Ph.D. degree in Computer Science and Information Engineering from the National Cheng Kung University, Tainan,
Taiwan, in 1998.
He is currently a Professor with the Department of Computer Science and Information Engineering, National University of Tainan, where he was the Dean
of the Research and Development Office from 2011 to 2014. His current research interests include adaptive assessment, intelligent agents, ontology applications,
Capability Maturity Model Integration (CMMI), fuzzy theory and applications, and machine learning. He also holds several patents on Fuzzy Markup Language
(FML), ontology engineering, document classification, image filtering, and healthcare.
He was the Emergent Technologies Technical Committee (ETTC) Chair of the IEEE Computational Intelligence Society (CIS) from 2009 to 2010 and the ETTC Vice-Chair
of the IEEE CIS in 2008. He is also an Associate Editor or editorial board member of international journals, such as the IEEE Transactions on Computational Intelligence
and AI in Games (IEEE TCIAIG), Applied Intelligence, Soft Computing, Journal of Ambient Intelligence & Humanized Computing (AIHC), International Journal of Fuzzy
Systems (IJFS), Journal of Information Science and Engineering (JISE), and Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII).
He also guest edited IEEE TCIAIG, Applied Intelligence, Journal of Internet Technology (JIT), and IJFS.
Prof. Lee received the award for outstanding achievement in Information and Computer Education & Taiwan Academic Network (TANet) from the Ministry of Education of Taiwan in
2009 and was recognized as an excellent or good researcher by the National University of Tainan from 2010 to 2013. Additionally, he served as the general chair of the 2015 Conference
on Technologies and Applications of Artificial Intelligence (TAAI 2015), general co-chair of 2015 IEEE Conference on Computational Intelligence and Games (IEEE CIG
2015), the program chair of the 2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), and the competition chair of the FUZZ-IEEE 2013. He is also
a member of the Program Committees of more than 50 conferences. He is a senior member of the IEEE CIS, a member of the Taiwanese Association for Artificial
Intelligence (TAAI), and the Software Engineering Association Taiwan. He was a member of the standing committee of TAAI from 2011 to 2014 and one of the standing
supervisors of Academia-Industry Consortium for Southern Taiwan Science Park from 2012 to 2013.
Dr. Giovanni Acampora (Senior Member, IEEE) received the Laurea (cum laude) and Ph.D. degrees in computer science from the University of Salerno, Salerno, Italy, in 2003 and 2007, respectively. Currently, he is a Reader in Computational Intelligence at the School of Science and Technology, Nottingham Trent University, Nottingham, U.K. From July 2011 to August 2012, he held a Hoofddocent Tenure Track position in Process Intelligence at the School of Industrial Engineering, Information Systems, Eindhoven University of Technology (TU/e), Eindhoven, The Netherlands. His research interests include fuzzy logic and applications, evolutionary computation, ambient intelligence, forensic intelligence, reputation and trustworthiness in e-commerce, and so on. In this context, he designed and developed the Fuzzy Markup Language, an XML-based environment for modeling transparent fuzzy systems, and he chairs the IEEE 1855 WG, the working group devoted to making FML the first IEEE Standard in the area of computational intelligence. Dr. Acampora served as chair and vice-chair of the IEEE CIS Standards Committee. He acts as Area Chair of the IEEE International Conference on Fuzzy Systems and was Chair of the Workshop on Computational Intelligence Tools at the IEEE Symposium Series on Computational Intelligence 2015. He has acted as a Program Committee member of several conferences in the areas of robotics and artificial and computational intelligence. He is acting as General co-Chair for FUZZ-IEEE 2017.
Yuandong Tian
Yuandong Tian is a Research Scientist at Facebook AI Research, working on Deep Learning and Computer Vision. Prior to that, he was a Software Engineer on the Google Self-Driving Car team in 2013-2014. He received his Ph.D. from the Robotics Institute, Carnegie Mellon University, in 2013, and his Bachelor's and Master's degrees in Computer Science from Shanghai Jiao Tong University. He is the recipient of a 2013 ICCV Marr Prize Honorable Mention for his work on a globally optimal solution to nonconvex optimization in image alignment.
Organized by Kazuo Tanaka
Abstract
This talk presents a comprehensive treatment of system-theoretical approaches to fuzzy systems modeling and control. These approaches are enabled by a
progression of design frameworks from the well-known convex linear matrix inequality (LMI) based design to non-convex sum-of-squares (SOS) based synthesis.
Today, there exists a large body of literature on fuzzy model-based control using LMIs. A key feature of LMI-based approaches is that they result in simple,
natural and effective design procedures as alternatives or supplements to other nonlinear control techniques that require special and rather involved knowledge.
The LMI-based design approaches entail obtaining numerical solutions by convex optimization methods such as the interior point method. Though LMI-based approaches
have enjoyed great success and popularity, there still exist a large number of design problems that either cannot be represented in terms of LMIs, or the results
obtained through LMIs are sometimes conservative. A post-LMI framework is the SOS-based approaches for control of nonlinear systems using polynomial fuzzy systems
and controllers, which includes the well-known Takagi-Sugeno fuzzy systems and controllers as special cases. The SOS framework has been extensively applied to
guaranteed-cost control, observer design, discrete system stabilization, etc. To obtain a polynomial fuzzy controller by solving design conditions efficiently,
non-convex design conditions are transformed into convex design conditions. However, the transformation often results in some challenging issues in SOS-based
approaches. Conversely, non-convex design conditions can avoid the transformation problems, but they are difficult to solve efficiently.
To this end, this talk presents a most recent result on an efficient numerical technique to deal with non-convex design conditions.
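For readers unfamiliar with the LMI framework, the classical open-loop Takagi-Sugeno stability condition (a standard textbook result, simplified here and not specific to this talk) illustrates the flavour of such convex design conditions: the fuzzy system with rule matrices A_1, ..., A_r is globally asymptotically stable if there exists a common matrix P satisfying, in LaTeX notation,

    P = P^{\top} \succ 0, \qquad A_i^{\top} P + P A_i \prec 0, \quad i = 1, \dots, r,

which is a set of LMIs in P and can be solved by convex optimization, e.g. interior point methods. SOS-based synthesis generalizes this picture from constant matrices to polynomial ones.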
The research covered in this talk has been conducted in our laboratory at the University of Electro-Communications (UEC), Tokyo, Japan, in collaboration with
Prof. Hua O. Wang and his laboratory at Boston University, Boston, USA. Throughout the talk, it will be reflected upon how to bridge enabling fuzzy model-based
control frameworks with system-theoretical approaches in the development of toolkits for control of nonlinear systems.
Biography
Professor Kazuo Tanaka is currently a Professor in the Department of Mechanical Engineering and Intelligent Systems at the University of
Electro-Communications, Tokyo, Japan. He received his Ph.D. in Systems Science from the Tokyo Institute of Technology in 1990. He was a Visiting
Scientist in Computer Science at the University of North Carolina at Chapel Hill in 1992 and 1993.
He received the Best Young Researchers Award from the Japan Society for Fuzzy Theory and Systems in 1990, the Outstanding Papers Award at the 1990 Annual
NAFIPS Meeting in Toronto, Canada, in 1990, the Outstanding Papers Award at the Joint Hungarian-Japanese Symposium on Fuzzy Systems and Applications in
Budapest, Hungary, in 1991, the Best Young Researchers Award from the Japan Society for Mechanical Engineers in 1994, the Outstanding Book Awards from
the Japan Society for Fuzzy Theory and Systems in 1995, 1999 IFAC World Congress Best Poster Paper Prize in 1999, 2000 IEEE Transactions on Fuzzy Systems
Outstanding Paper Award in 2000, the Best Paper Selection at 2005 American Control Conference in Portland, USA, in 2005, the SICE Award at RoboCup Japan
Open 2010, Osaka, Japan, in 2010, the Best in Class Autonomy Award at the RoboCup 2011 Japan Open in Osaka, Japan, in 2011, the Best Paper Award at the 2013 IEEE
International Conference on Control System, Computing and Engineering (ICCSCE 2013) in Penang, Malaysia, in 2013, and Best Paper Finalist at the 2013
International Conference on Fuzzy Theory and Its Applications (iFUZZY2013) in Taipei, Taiwan, in 2013.
He served as an Associate Editor for Automatica and for the IEEE Transactions on Fuzzy Systems. He also served as Chair of the Task Force on Fuzzy Control
Theory and Application, IEEE Computational Intelligence Society Fuzzy Systems Technical Committee. He is currently on the IEEE Control Systems Society
Conference Editorial Board.
His research interests include fuzzy systems control, nonlinear systems control, unmanned aerial vehicle, robotics, brain-machine interface and their
applications. He published many papers in these areas, as well as 17 books, including: Fuzzy Control Systems Design and Analysis: A Linear Matrix
Inequality Approach (Wiley-Interscience, 2001). His publications currently report over 18,000 citations according to Google Scholar Citations, with an
h-index of 46 and an i10 index of 99. He is an IEEE fellow and an IFSA fellow.
Organized by Christian Wagner, Jon Garibaldi and Robert John
Abstract
General type-2 fuzzy sets and systems are paradigms which enable fine-grained capturing, modelling and reasoning with uncertain information. While recent
years have seen increasing numbers of applications from control to intelligent agents and environmental management, the perceived complexity of general type-2
fuzzy sets and systems still makes their adoption a daunting and not time-effective proposition to the majority of researchers.
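As a brief reminder of the formal object involved (standard notation following the usual definition, not anything specific to this tutorial), a general type-2 fuzzy set on a universe X is

    \tilde{A} = \{\, ((x,u), \mu_{\tilde{A}}(x,u)) \;\mid\; x \in X,\; u \in J_x \subseteq [0,1] \,\}, \qquad 0 \le \mu_{\tilde{A}}(x,u) \le 1,

where J_x is the primary membership of x and \mu_{\tilde{A}}(x,u) is the secondary grade; interval type-2 sets are the special case in which all secondary grades equal 1.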
This tutorial is designed to give researchers a practical introduction to general type-2 fuzzy sets and systems. Over three hours, the modular tutorial will
address three main aspects of using and working with general type-2 fuzzy sets and systems:
Biography
Dr Christian Wagner is an Associate Professor in Computer Science at the University of Nottingham, UK. He received his PhD in Computer Science from the University of Essex in 2009, after which he was involved both in the management and the scientific work of the EU FP7 project ATRACO, joining the University of Nottingham in 2011. His main research interests are centred on uncertainty handling, approximate reasoning (reasoning in the face of uncertainty, lack of knowledge and vagueness), decision support and data-driven decision making using computational intelligence techniques. Recent applications of his research have focused in particular on decision support in environmental and infrastructure planning & management contexts as well as cyber-security. He has published more than 60 peer-reviewed articles in international journals and conferences, two of which recently won best paper awards (the Outstanding IEEE Transactions on Fuzzy Systems Paper Award 2013, for a paper published in 2010, and a best paper award for a FUZZ-IEEE 2012 conference paper), and several book chapters. Dr Wagner is currently active as PI and Co-I on a number of research projects, with overall funding as PI of £1 million and funding as Co-I of £2 million. He is a senior member of the IEEE, an Associate Editor of the IEEE Transactions on Fuzzy Systems journal (IF: 6.3) and is actively involved in the academic community through, for example, the organization of special sessions at premier IEEE conferences such as the World Congress on Computational Intelligence 2014 and the IEEE Conference on Systems, Man and Cybernetics 2015. He has developed and been involved in the creation of multiple open source software frameworks, making cutting-edge research accessible both to peer researchers as well as to different (multidisciplinary, beyond computer science) research and practitioner communities, including R and Java based toolkits for type-2 fuzzy systems in use in more than ten countries.
Jon Garibaldi
Prof. Jon Garibaldi is Head of the Intelligent Modelling and Analysis (IMA) Research Group in the School of Computer Science at the University of Nottingham. His main research interest is in developing intelligent techniques to model human reasoning in uncertain environments, with a particular emphasis on the medical domain. Prof. Garibaldi has been the PI on EU and EPSRC projects worth over £3m, and CoI on a portfolio of grants worth over £25m. He is Director of the University of Nottingham Advanced Data Analysis Centre, established in 2012 to provide leading-edge data analysis services across the University and for industrial consultancy. His experience of leading large research projects includes his roles as Lead Scientist and Co-ordinator of BIOPTRAIN, a Marie-Curie Early Stage Training network in bioinformatics optimisation worth over €2m, the local co-ordinator of the €6.4m BIOPATTERN FP6 Network of Excellence, lead Computer Scientist on a £700k MRC DPFS (Developmental Pathway Funding Scheme) project to transfer the Nottingham Prognostic Index for breast cancer prognosis into clinical use. Industrial projects include a TSB funded project for data analysis in the transport sector, and a collaborative project with CESG (GCHQ) investigating and modelling variation in human reasoning in subjective risk assessments in the context of cyber-security. He is currently the local PI for Nottingham on the £900k UKCRC Joint Funders Tissue Directory and Coordination Centre, a CoI on the £14m BBSRC/EPSRC Synthetic Biology Research Centre in Sustainable Routes to Platform Chemicals, and was CoI on the £10m BBSRC/EPSRC Centre for Plant Integrative Biology. Prof. Garibaldi has published over 200 articles on fuzzy systems and intelligent data analysis, including over 50 journal papers and over 150 conference articles, three book chapters, and three co-edited books. He is an Associate Editor of Soft Computing, was Publications Chair of FUZZ-IEEE 2007 and General Chair of the 2009 UK Workshop on Computational Intelligence, and has served regularly in the organising committees and programme committees of a range of leading international conferences and workshops, such as FUZZ-IEEE, WCCI, EURO and PPSN. He is a member of the IEEE.
Robert John
Prof. Robert John is a Professor of Operational Research and Computer Science and Head of the ASAP research group. He is a senior member of IEEE, fellow of the British Computer Society and elected member of the EPSRC college. In the field of type-2 fuzzy logic, his work is widely recognised by the international fuzzy logic community as leading in the aspects of theoretical foundations, as well as practical applications. His work has produced many fundamental new results that have opened the field to new research, enabling a broadening of scope and application. He is associate editor of the journal Soft Computing and the International Journal of Information & System Sciences, and member of the editorial board of International Journal of Cognitive Neurodynamics, Grey Systems: Theory and Application, Turkish Journal of Fuzzy Systems, International Journal for Computational Intelligence and Information and System Sciences. He chaired EUSFLAT2001 organised on behalf of the European Society of Fuzzy Logic and Technology and held at De Montfort University. He has over 150 publications of which circa 50 are in international journals and many papers are very well cited.
Organized by Derek T. Anderson, Chee Seng Chan and James M. Keller
Abstract
We will discuss challenges in modern computer vision (CV) research and possible directions, tools and
novel ideas that the computational intelligence (CI) community may contribute. We will discuss difficult
problems or challenges in CV that are recognized by researchers in the areas of low-level, mid-level and
high-level CV. We review standard and modern CV approaches, discuss data sets currently used by the CV
community, and we present some CI techniques employed in each area, always with an eye towards
where soft computing can make the best impact. This event is not meant to be a survey of all techniques;
if yours is left out, please do not get mad. The intent is to provide an assessment, from our perspective,
of the power, limitations, and potential of CI algorithms, with thoughts about the challenges for those of
us in the CI family to have our technologies be better accepted by the CV community.
Below is a tentative list of topics and their subsequent organization:
Part 1
Introduction:
Biography
Derek T. Anderson is an Assistant Professor in Electrical and Computer Engineering (ECE) at Mississippi State University (MSU). He received his B.S. and M.S. degrees in Computer Science and the Ph.D. in ECE. His research interests are new frontiers in data and information fusion for pattern analysis and automated decision making with an emphasis on heterogeneous uncertain information. This includes measure theory and fuzzy integrals, clustering, multi-source (sensor, algorithm and human) fusion, remote sensing and computer vision. Derek received the Best Student Paper Award at FUZZ-IEEE 2008, the Best Paper Award at FUZZ-IEEE 2012 and was a co-author of the Best Student Paper Award at SPIE in ATR in 2013. He has received funding from DARPA, the Air Force Research Laboratory (AFRL), the National Institute of Justice (NIJ), the Leonard Wood Institute (LWI), Pacific Northwest National Laboratory (under a U.S. Department of Energy contract), the Army Research Office (ARO), NVESD (Countermine and Science and Technology) and the U.S. Army ERDC. He has published 1 book chapter, 18 journal manuscripts and 57 conference proceedings papers, and he is an Associate Editor for the IEEE Transactions on Fuzzy Systems (TFS). Derek has co-chaired numerous special sessions at WCCI and IPMU. He also ran a workshop (in conjunction with Jim Keller and Tony Han) on the “View of Computer Vision Research and Challenges for the Fuzzy Set Community” at FUZZ-IEEE 2013.
Chee Seng Chan
Chee Seng Chan is a Senior Lecturer in the Faculty of Computer Science and Information Technology, University of Malaya, Malaysia. He received his PhD from the University of Portsmouth, U.K., in 2008. His research interests span a variety of aspects of fuzzy qualitative reasoning and computer vision, with a focus on image/video content analysis and human-robot interaction. He is the founding chair of the IEEE Computational Intelligence Society (CIS) Malaysia chapter and founder of the Malaysian Image Analysis and Machine Intelligence Association (MIAMI), a society under the International Association for Pattern Recognition (IAPR). He is/was the organizing chair for the Asian Conference on Pattern Recognition (ACPR) in 2015, general chair for the IEEE Visual Communications and Image Processing (VCIP) conference in 2013, and has co-chaired numerous special sessions at FUZZ-IEEE (2010-2015). Also, he has served as a guest editor for the International Journal of Uncertainty, Fuzziness and Knowledge-based Systems (IJUFKS), Information Sciences (INS) and Signal, Video and Image Processing (SVIP). He is a recipient of the Hitachi Research Fellowship in 2013 and the IET (Malaysia) Young Engineer award in 2010. Finally, he is a senior member of the IEEE, a chartered engineer and a member of the IET.
James M. Keller
James M. Keller holds the University of Missouri Curators Professorship in the Electrical and Computer Engineering and Computer Science Departments on the Columbia campus. He is also the R. L. Tatum Professor in the College of Engineering. His research interests center on computational intelligence with a focus on problems in computer vision, pattern recognition, and information fusion including bioinformatics, spatial reasoning, geospatial intelligence, landmine detection and technology for eldercare. Professor Keller has coauthored over 400 technical publications. Jim is a Life Fellow of the IEEE, an IFSA Fellow, and past President of NAFIPS. He received the 2007 Fuzzy Systems Pioneer Award and the 2010 Meritorious Service Award from the IEEE Computational Intelligence Society. He finished a full six year term as Editor-in-Chief of the IEEE Transactions on Fuzzy Systems, followed by being the Vice President for Publications of the IEEE CIS from 2005-2008, and since then an elected CIS Adcom member. He is the IEEE TAB Transactions Chair and a member of the IEEE Publication Review and Advisory Committee. Jim has had many conference positions and duties over the years.
Organized by Meng Joo Er
Abstract
It is well known that fuzzy logic provides human reasoning capabilities to capture
uncertainties, which cannot be described by precise mathematical models. In essence, a
fuzzy logic system is a rule-based system, which comprises a set of linguistic rules in the
form of “IF-THEN”. Designing a fuzzy system is a subjective approach, which is
adopted to express a designer's knowledge. As there is no formal and effective way of
knowledge acquisition, it is difficult for a designer, even if he/she is a domain expert, to
examine all the input-output data from a complex system so as to find a number of
appropriate rules for the fuzzy system. In order to circumvent this problem, it is desirable
to develop an objective approach to automate the modeling process based on numerical
training data for fuzzy systems.
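To fix ideas, here is a minimal sketch of a zero-order Takagi-Sugeno-Kang fuzzy system with Gaussian membership functions, exactly the kind of rule base whose structure and parameters a designer would otherwise have to hand-craft. It is an illustrative toy, not one of the DFNN architectures covered in the tutorial.

    import numpy as np

    def gauss(x, center, sigma):
        return np.exp(-0.5 * ((x - center) / sigma) ** 2)

    # Rule base: IF x is about c_i THEN y = w_i  (zero-order TSK consequents).
    centers = np.array([-1.0, 0.0, 1.0])
    sigmas  = np.array([0.5, 0.5, 0.5])
    weights = np.array([-2.0, 0.0, 2.0])

    def infer(x):
        firing = gauss(x, centers, sigmas)               # rule firing strengths
        return float(firing @ weights / firing.sum())    # weighted-average defuzzification

    print([round(infer(x), 3) for x in (-1.0, 0.0, 0.5, 1.0)])

Automating the choice of the centers, widths, consequents and, above all, the number of such rules from numerical data is precisely the modeling process discussed above.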
Neural networks offer remarkable advantages, such as adaptive learning, parallelism,
fault tolerance, and generalization. They have proved to be powerful techniques in
the discipline of system control, especially when the controlled system is difficult to
model accurately, or when the controlled system has large uncertainties and strong
nonlinearities. Thus, fuzzy logic and neural networks have been widely adopted in
model-free adaptive control of nonlinear systems resulting in neural-network-based
fuzzy systems termed Fuzzy Neural Network (FNN) Systems.
Usually, the typical approach to designing FNNs is to build standard neural
networks which are designed to approximate a fuzzy algorithm or a process of fuzzy
inference through the structure of neural networks. These FNNs can readily solve two
problems of conventional fuzzy reasoning: 1) The lack of systematic design for
membership functions because it is basically a heuristic approach, and 2) The lack of
adaptability for possible changes in the reasoning environment. The two problems
intrinsically concern parameter estimation. Yet, in most FNNs, structure identification is
still time-consuming because the determination of hidden nodes in neural networks can
be viewed as the choice of number of fuzzy rules. On the other hand, we can see that
most of the existing FNNs are trained by the Back-Propagation (BP) algorithm. It is well
known that the BP method is generally slow and likely to become trapped in local
minima. Hence, a fast learning paradigm for real-time applications is highly desirable.
Dynamic Fuzzy Neural Networks (DFNN) are FNN systems whose structure is
evolving and/or self-organising. Some researchers refer to DFNNs as self-organising FNNs or
evolving FNNs. The objective of this tutorial is to review Dynamic Fuzzy Neural
Networks developed by various researchers over the last few decades. It will have a
comprehensive coverage of three aspects of DFNN, namely Architectures, Algorithms,
and Applications. There are numerous kinds of FNN proposed in the literature and most
of them are suitable only for off-line learning. Online learning algorithms are more
attractive as they can be used for online identification and control in dealing
with real-world engineering problems, which are nonlinear, time-varying and ill-defined
dynamic systems. In this tutorial, both offline and online methods will be presented.
Structure of the Tutorial
Biography
Professor Er Meng Joo is currently a Full Professor in Electrical and
Electronic Engineering, Nanyang Technological University, Singapore.
He served as the Founding Director of the Renaissance Engineering
Programme and as an elected member of the NTU Advisory Board
from 2009 to 2012. He served as a member of the NTU Senate
Steering Committee from 2010 to 2012.
He has authored five books entitled “Dynamic Fuzzy Neural Networks:
Architectures, Algorithms and Applications” and “Engineering
Mathematics with Real-World Applications” published by McGraw Hill
in 2003 and 2005 respectively, and “Theory and Novel Applications of
Machine Learning” published by In-Tech in 2009, “New Trends in Technology: Control,
Management, Computational Intelligence and Network Systems” and “New Trends in
Technology: Devices, Computer, Communication and Industrial Systems”, both published
by SCIYO, 19 book chapters and more than 500 refereed journal and conference papers
in his research areas of interest.
Professor Er was awarded the Web of Science Top 1% Best Cited Paper Award and the Elsevier
Top 20 Best Cited Paper Award in 2007 and 2008 respectively. In recognition of the
significant and impactful contributions to Singapore’s development by his research
projects, Professor Er won the Institution of Engineers, Singapore (IES) Prestigious
Engineering Achievement Award twice (2011 and 2015) and the highest honour of the IES
Innovation Challenge 2015. Under his leadership, the NTU Team emerged first runner-up
in the Freescale Technology Forum Design Challenge 2008. He is also the only dual winner
in Singapore IES Prestigious Publication Award in Application (1996) and IES Prestigious
Publication Award in Theory (2001). He received the Teacher of the Year Award for the
School of EEE in 1999, School of EEE Year 2 Teaching Excellence Award in 2008, the Most
Zealous Professor of the Year Award in 2009 and the Outstanding Mentor Award in 2014.
He also received the Best Session Presentation Award at the World Congress on
Computational Intelligence in 2006 and the Best Presentation Award at the International
Symposium on Extreme Learning Machine 2012. On top of this, he has more than 50
awards at international and local competitions.
Currently, Professor Er serves as the Editor-in-Chief of Transactions on Machine Learning
and Artificial Intelligence and the International Journal of Electrical and Electronic
Engineering and Telecommunications. He also serves as an Area Editor of the International
Journal of Intelligent Systems Science and an Associate Editor of 11 refereed international
journals, namely IEEE Transaction on Fuzzy Systems, IEEE Transaction on Cybernetics,
International Journal of Fuzzy Systems, Neurocomputing, ETRI Journal, Journal of
Robotics, International Journal of Applied Computational Intelligence and Soft Computing,
International Journal of Fuzzy and Uncertain Systems, International Journal of Automation
and Smart Technology, International Journal of Modelling, Simulation and Scientific
Computing, International Journal of Intelligent Information Processing and an editorial
board member of the EE Times.
Professor Er has been invited to deliver more than 60 keynote speeches and invited talks
overseas. He has also been active in professional bodies. He has served as Chairman of
IEEE Computational Intelligence Society (CIS) Singapore Chapter (2009 to 2011) and
Chairman of IES Electrical and Electronic Engineering Technical Committee (EEETC) (2004
to 2006 and 2008 to 2012). Under his leadership, the IEEE CIS Singapore Chapter won
the CIS Outstanding Chapter Award 2012 (The Singapore Chapter is the first chapter in
Asia to win the award). In recognition of his outstanding contributions to professional
bodies, he was bestowed the IEEE Outstanding Volunteer Award (Singapore Section) and
the IES Silver Medal in 2011. Due to his outstanding contributions in education, research,
administration and professional services, he is listed in Who’s Who in Engineering,
Singapore, Edition 2013.
Organized by Carlos Coello
This tutorial provides a general picture of the current state of the art in multiobjective optimization using metaheuristics. First, some historical background
is provided, dating back to the origins of multiobjective optimization in general. This discussion motivates the use of metaheuristics for solving multiobjective
problems and includes a brief description of some of the earliest approaches proposed in the literature. Then, a discussion on different heuristics used for
multiobjective optimization is provided. This discussion includes evolutionary algorithms, simulated annealing, tabu search, scatter search, the ant system, particle
swarm optimization and artificial immune systems. The tutorial finishes with a discussion of some of the research topics that seem more promising for the next few years.
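For completeness, the central concept on which all of these metaheuristics rely is Pareto dominance; in the usual minimization setting with k objectives it reads, in LaTeX notation,

    \mathbf{x} \prec \mathbf{y} \;\iff\; f_i(\mathbf{x}) \le f_i(\mathbf{y}) \ \ \forall i \in \{1,\dots,k\} \ \text{ and } \ \exists j : f_j(\mathbf{x}) < f_j(\mathbf{y}),

and the Pareto-optimal set collects the feasible solutions that are not dominated by any other feasible solution.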
Biography
Carlos Artemio Coello Coello received a PhD in Computer Science from Tulane University (USA) in 1996. He is currently full professor with distinction
at CINVESTAV-IPN in Mexico City, Mexico. He has published over 400 papers in international peer-reviewed journals, book chapters, and conferences.
He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007)
and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). His publications currently report over
28,000 citations, according to Google Scholar (his h-index is 67). He received the "2007 National Research Award" (granted by the Mexican Academy of
Science) in the area of "exact sciences" and, since January 2011, he is an "IEEE Fellow" for "contributions to multi-objective optimization and
constraint-handling techniques."
He is also the recipient of the prestigious "2013 IEEE Kiyo Tomiyasu Award" and of the "2012 National Medal of Science and Arts" in the area
of "Physical, Mathematical and Natural Sciences" (this is the highest award that a scientist can receive in Mexico). He also serves as associate
editor of the IEEE Transactions on Evolutionary Computation, Computational Optimization and Applications, Pattern Analysis and Applications,
Journal of Heuristics, Evolutionary Computation and Applied Soft Computing. He has served as Vice-Chair and Chair of the IEEE CIS Evolutionary
Computation Technical Committee and is currently the Chair of the IEEE CIS Distinguished Lecturers Committee. He was also the General Chair
of the 2013 IEEE Congress on Evolutionary Computation, which took place in Cancún, Mexico.
Organized by Erik D. Goodman
Abstract
Many researchers have developed evolutionary algorithms that excel at solving various types of problems, which would seem to make them attractive for companies to use on such problems. In our experience, though, it is difficult to get the staff of a company even to entertain such an idea. This tutorial will explore some factors that make it difficult and some keys to success, in either a consulting/sponsored research relationship with industry or in founding a company to provide search/optimization products or services. Time will be provided for addressing questions from the participants. The presenter co-founded Red Cedar Technology, Inc. (one such successful company), directed a university-based design center and an industrial consortium, and has done many industry-sponsored R&D projects at the university. He now directs BEACON, an NSF-sponsored Science and Technology Center that includes about 20 faculty members studying computational evolution in various forms, including GA/GP/ES and digital evolution.
Biography
Erik D. Goodman is PI and Director of the BEACON Center for the Study of Evolution in Action, an NSF Science and Technology Center headquartered at Michigan State University, funded at $47.5 million for 2010-2020. BEACON now has over 600 members, including many who study evolutionary computation or digital evolution. Goodman studies application of evolutionary principles to solution of engineering design problems. He received the Ph.D. in computer and communication sciences from the University of Michigan in 1972. He joined MSU’s faculty in Electrical Engineering and Systems Science in 1971, was promoted to full professor in 1984, and also holds appointments in Mechanical Engineering and in Computer Science and Engineering, in which he has guided many Ph.D. students. He directed the Case Center for Computer-Aided Engineering and Manufacturing from 1983-2002, and MSU’s Manufacturing Research Consortium from 1993-2003. He co-founded MSU’s Genetic Algorithms Research and Applications Group (GARAGe) in 1993. In 1999, he co-founded Red Cedar Technology, Inc., which develops design optimization software, and was Vice President for Technology until BEACON was founded in 2010. The company was sold in 2013 and continues to operate. Goodman was chosen Michigan Distinguished Professor of the Year, 2009, by the Presidents Council, State Universities of Michigan. He was Chair of the Executive Board and a Senior Fellow of the International Society for Genetic and Evolutionary Computation, 2003-2005. He was founding chair of the ACM’s SIG on Genetic and Evolutionary Computation (SIGEVO) in 2005.
Organized by Thomas Stützle
Abstract
The design of algorithms for computationally hard problems is time-consuming and difficult. This is in large part due to a number of aggravating circumstances
such as the NP-hardness of most of the problems to be solved, the difficulty of algorithm analysis due to stochasticity and heuristic biases, and the large number
of degrees of freedom in defining and selecting algorithmic components and settings of numerical parameters. Even for off-the-shelf solvers, performance
strongly depends on the appropriate settings of a large number of parameters that can influence their search behaviour. In recent years, automatic algorithm
configuration methods have been developed to effectively search large and diverse parameter spaces to identify superior algorithm designs and performance-improving
parameter settings. These methods have by now proven instrumental in developing high-performance algorithms.
In the first part of this tutorial, I will introduce the algorithm design and tuning tasks that recent automatic algorithm configuration methods address; describe the
main existing automatic algorithm configuration techniques; and show how (easily) they can be used. In the second part of the tutorial, I will discuss various successful
applications of automatic algorithm configuration methods to configure mixed-integer programming solvers, to generate hybrid stochastic local search algorithms, to
design multi-objective optimisers, and to improve algorithm anytime behaviour. Finally, we will discuss specific aspects relevant to the application of automatic algorithm
configuration methods and argue that automatic algorithm configuration methods will transform the way algorithms for difficult problems are designed and developed in
the future.
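To make the configuration task concrete for readers new to the area, the following minimal Python sketch illustrates the basic loop that all configurators share, using plain random search over a made-up parameter space and a toy target algorithm; it illustrates the idea only, not any specific method such as those presented in the tutorial.

    import random

    def run_target_algorithm(instance, params):
        # Hypothetical target algorithm: returns a cost for one instance under
        # the given parameter setting. A real configurator would invoke the
        # actual solver being tuned here.
        noise = random.gauss(0.0, 0.1)
        return ((params["temperature"] - 1.5) ** 2
                + (params["alpha"] - 0.7) ** 2
                + 0.01 * instance + noise)

    def sample_configuration():
        # Draw one point from a mixed numerical/categorical parameter space.
        # The categorical parameter is included only to show mixed spaces;
        # the toy cost above ignores it.
        return {
            "temperature": random.uniform(0.1, 5.0),
            "alpha": random.uniform(0.0, 1.0),
            "restart": random.choice(["never", "fixed", "adaptive"]),
        }

    def configure(train_instances, budget=200):
        # Plain random search: keep the configuration with the lowest mean
        # cost over the training instances.
        best_params, best_cost = None, float("inf")
        for _ in range(budget):
            params = sample_configuration()
            mean_cost = sum(run_target_algorithm(i, params)
                            for i in train_instances) / len(train_instances)
            if mean_cost < best_cost:
                best_params, best_cost = params, mean_cost
        return best_params, best_cost

    if __name__ == "__main__":
        print(configure(train_instances=list(range(10))))

Real configurators replace the random sampling with racing, model-based search or iterated local search, but the evaluate-on-training-instances loop is the same.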
Biography
Dr. Stützle is a senior research associate of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received the Diplom (German equivalent of M.S. degree) in business engineering from the Universität Karlsruhe (TH), Karlsruhe, Germany in 1994, and his PhD and his habilitation in computer science both from the Computer Science Department of Technische Universität Darmstadt, Germany in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has published extensively in the wider area of metaheuristics, including 19 edited proceedings or books, 9 journal special issues, and more than 170 peer-reviewed articles, many of which are highly cited. He is associate editor of Applied Mathematics and Computation, Computational Intelligence, and Swarm Intelligence and on the editorial board of five other journals including Evolutionary Computation and JAIR. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. In fact, for more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to several effective algorithm configuration techniques such as F-race, Iterated F-race and ParamILS. His 2002 GECCO paper on "A Racing Algorithm For Configuring Metaheuristics" (joint work with M. Birattari, L. Paquete, and K. Varrentrapp) received the 2012 SIGEVO impact award.
Organized by Andries Engelbrecht
Abstract
The main objective of this tutorial will be to answer the question of whether particle swarm optimization (PSO) can be considered a universal optimizer. In the context of this tutorial, this means that PSO can be applied to a wide range of optimization problem types as well as search domain types. The tutorial will start with a very compact overview of the original, basic PSO; some experience with and background on PSO will be assumed. A summary of important theoretical findings about PSO, in particular particle trajectories and convergence behavior, will be provided, as these provide important insights for the remainder of the tutorial. This will be followed by a short discussion of heuristics for selecting proper values of the control parameters. The remainder and bulk of the tutorial will cover a classification of different problem types and will show how PSO can be applied to solve problems of these types, organized into sections, one for each problem type.
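For reference, a minimal Python sketch of the basic inertia-weight PSO that the tutorial builds on is given below; the parameter values are common textbook defaults and the sphere objective is only a stand-in, not material from the tutorial itself.

    import random

    def sphere(x):
        # Simple benchmark objective to minimise.
        return sum(xi * xi for xi in x)

    def pso(objective, dim=10, swarm_size=30, iters=200,
            w=0.7298, c1=1.49618, c2=1.49618):
        # Initialise positions, velocities and personal/global bests.
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
        vel = [[0.0] * dim for _ in range(swarm_size)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = min(range(swarm_size), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]

        for _ in range(iters):
            for i in range(swarm_size):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # Velocity update: inertia + cognitive + social components.
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    if __name__ == "__main__":
        print(pso(sphere)[1])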
Biography
Andries Engelbrecht received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999
respectively. He is Professor in Computer Science at the University of Pretoria, and serves as Head of the department. He holds the position of South
African Research Chair in Artificial Intelligence, and leads the Computational Intelligence Research Group. His research interests include swarm
intelligence, evolutionary computation, neural networks, artificial immune systems, and the application of these paradigms to data mining, games,
bioinformatics, finance, and difficult optimization problems. He has published over 270 papers in these fields and is author of two books, Computational
Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.
Prof Engelbrecht is very active in the international community, annually serving as reviewer for over 20 journals and 10 conferences. He is an Associate
Editor of the IEEE Transactions on Evolutionary Computation, Journal of Swarm Intelligence, IEEE Transactions on Computational Intelligence and AI in
Games, and Soft Computing. He was co-guest editor of special issues of the IEEE Transactions on Evolutionary Computation and the Journal of Swarm
Intelligence. He served on the international program committee and organizing committee of a number of conferences, organized special sessions, presented
tutorials, and took part in panel discussions. He was the founding chair of the South African chapter of the IEEE Computational Intelligence Society.
He is a member of the Evolutionary Computation Technical Committee, Games Technical Committee, and the Evolutionary Computation in Dynamic and Uncertain
Environments Task Force.
Organized by Marouane Kessentini
Abstract
Software engineering is by nature a search for optimal or near-optimal solutions. This search is often complex, with several competing
constraints and conflicting functional and non-functional objectives. The situation is made worse by the fact that successful software systems are nowadays more complex,
more critical and more dynamic, leading to an increasing need to automate or semi-automate the search for acceptable solutions for software
engineers. As a result, an emerging research area called Search-Based Software Engineering (SBSE) is rapidly growing. SBSE is a software development
practice which focuses on couching software engineering problems as optimization problems and utilizing meta-heuristic and computational search techniques
to discover and automate the search for near-optimal solutions to those problems.
SBSE has been applied to a wide variety of software engineering problems covering software life cycle activities such as testing, requirements engineering,
software management, refactoring, re-modularization, etc. While SBSE has been successfully applied to such a wide variety of problems,
several challenges remain to be addressed. In this talk I will focus on the basic concepts related to the formulation of large-scale, real-world
software engineering problems as search problems, such as test case generation, model transformation and code refactoring. Then I will describe several
successful SBSE projects in the automotive industry and give several future research directions for handling the growing scalability issues when adapting computational
search and intelligence techniques to real-world software engineering problems.
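As a purely hypothetical illustration of how a software engineering task can be couched as a search problem, the sketch below encodes a refactoring plan as a sequence of operations and searches over plans with a toy fitness function; the operation names and the smell-counting evaluator are invented placeholders, not part of any tool or project described in this talk.

    import random

    OPERATIONS = ["extract_method", "move_method", "inline_class", "extract_class"]

    # Hypothetical evaluator: pretends that a particular mix of operations
    # removes the most code smells; a real SBSE tool would analyse an actual
    # code model instead.
    USEFUL = {"extract_method": 3, "move_method": 2, "extract_class": 1}

    def count_code_smells(plan):
        remaining = 30
        for op, cap in USEFUL.items():
            remaining -= 4 * min(plan.count(op), cap)
        return max(remaining, 0)

    def fitness(plan):
        # Fewer remaining smells is better; a small penalty keeps plans short.
        return count_code_smells(plan) + 0.1 * len(plan)

    def hill_climb(iters=500):
        # Simple stochastic search over refactoring plans of fixed length.
        best = [random.choice(OPERATIONS) for _ in range(8)]
        best_fit = fitness(best)
        for _ in range(iters):
            candidate = best[:]
            candidate[random.randrange(len(candidate))] = random.choice(OPERATIONS)
            cand_fit = fitness(candidate)
            if cand_fit < best_fit:
                best, best_fit = candidate, cand_fit
        return best, best_fit

    if __name__ == "__main__":
        print(hill_climb())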
Biography
Dr. Marouane Kessentini is an Assistant Professor in the Department of Computer and Information Science at the University of Michigan. He is the founder of the Search-Based Software Engineering (SBSE) research lab, which now includes one post-doc, six PhD students and seven master's students. He has several collaborations with industrial companies on studying software engineering problems, such as software quality, software testing, software migration and software evolution, using optimization techniques. He also received the best dissertation award in 2012 from the University of Montreal and a Presidential BSc Award from the President of Tunisia in 2007. He has published more than 70 papers in software engineering conferences and journals, including 3 best paper awards. He has served as a program committee member for several major conferences (GECCO, MODELS, ICMT, SSBSE, etc.) and as an organization member of many conferences and workshops. He was also the co-chair of the SBSE track at the GECCO 2014/2015 conferences and the general chair of the Search Based Software Engineering Symposium (SSBSE 2016). He is the founder of the North American Symposium on Search Based Software Engineering.
Organized by P. N. Suganthan and M. Z. Ali
Abstract
Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through computational steps similar to those employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE – a novel concept that can be applied to swarm & evolutionary algorithms to solve various kinds of optimization problems. The talk will also discuss neighborhood-topology-based DE to improve the performance of DE on multi-modal landscapes. Theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters will be discussed. The talk will finally highlight a few problems that pose a challenge to the state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
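For readers new to DE, a minimal Python sketch of the classic DE/rand/1/bin scheme is given below to illustrate the difference-vector-based perturbation described above; it is a textbook baseline, not one of the advanced variants covered in the tutorial.

    import random

    def sphere(x):
        return sum(xi * xi for xi in x)

    def de_rand_1_bin(objective, dim=10, pop_size=50, gens=300,
                      F=0.5, CR=0.9, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
        fit = [objective(ind) for ind in pop]
        for _ in range(gens):
            for i in range(pop_size):
                # Pick three distinct members different from the target vector.
                r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
                j_rand = random.randrange(dim)
                trial = []
                for j in range(dim):
                    if random.random() < CR or j == j_rand:
                        # Mutation: base vector plus scaled difference of two others.
                        v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    else:
                        v = pop[i][j]
                    trial.append(min(max(v, lo), hi))
                f_trial = objective(trial)
                # Greedy selection between target and trial vectors.
                if f_trial <= fit[i]:
                    pop[i], fit[i] = trial, f_trial
        return min(fit)

    if __name__ == "__main__":
        print(de_rand_1_bin(sphere))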
Biography
Ponnuthurai Nagaratnam Suganthan received the B.A. degree, Postgraduate Certificate and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK in 1990, 1992 and 1994, respectively. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept of Electrical Engineering, University of Sydney in 1995–96 and as a lecturer in the Dept of Computer Science and Electrical Engineering, University of Queensland in 1996–99. He moved to NTU in 1999. He is an Editorial Board Member of the Evolutionary Computation Journal, MIT Press. He is an associate editor of the IEEE Trans on Cybernetics (2012 - ), IEEE Trans on Evolutionary Computation (2005 - ), Information Sciences (Elsevier) (2009 - ), Pattern Recognition (Elsevier) (2001 - ) and Int. J. of Swarm Intelligence Research (2009 - ) journals. He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010 - ), an Elsevier journal. His SaDE paper (published in April 2009) won the IEEE Trans. on Evolutionary Computation outstanding paper award in 2012. Dr Jane Jing Liang (his former PhD student) won the IEEE CIS Outstanding PhD Dissertation Award in 2014. The IEEE CIS Singapore Chapter won the best chapter award in Singapore in 2014 for its achievements in 2013 under his leadership. His research interests include swarm and evolutionary algorithms, pattern recognition, numerical optimization by population-based algorithms and applications of swarm, evolutionary & machine learning algorithms. His publications have been well cited (Google Scholar citations: 17k). His SCI-indexed publications attracted over 1000 SCI citations in each of the calendar years 2013, 2014 and 2015. He was selected as one of the highly cited researchers in computer science by Thomson Reuters in 2015. He served as the General Chair of the IEEE SSCI 2013. He has been a member of the IEEE (S'90-M'92-SM'00-F'15) since 1990 and was an elected AdCom member of the IEEE Computational Intelligence Society (CIS) for 2014-2016.
Organized by Michael Epitropakis and Xiaodong Li
Abstract
Population or single solution search-based optimization algorithms (i.e. {meta,hyper}-heuristics) in their original forms are usually designed for locating a single global solution. Representative examples include, among others, evolutionary and swarm intelligence algorithms. These search algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are "multimodal" by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions, or even all of them, so that a decision maker can choose one that is most appropriate for his/her problem domain. Numerous techniques have been developed in the past for locating multiple optima (global and/or local). These techniques are commonly referred to as "niching" methods. A niching method can be incorporated into a standard search-based optimization algorithm, in a sequential or parallel way, with the aim of locating multiple globally optimal or suboptimal solutions. Sequential approaches locate optimal solutions progressively over time, while parallel approaches promote and maintain the formation of multiple stable sub-populations within a single population. Many niching methods have been developed in the past, including crowding, fitness sharing, derating, restricted tournament selection, clearing, speciation, etc. In more recent times, niching methods have also been developed for meta-heuristic algorithms such as Particle Swarm Optimization, Differential Evolution and Evolution Strategies.
In this tutorial we aim to provide an introduction to niching methods, including their historical background and the motivation for employing niching in EAs. We will present in detail a few classic niching methods, such as the fitness sharing and crowding methods. We will also provide a review of several new niching methods that have been developed for meta-heuristics such as Particle Swarm Optimization and Differential Evolution. Employing niching methods in real-world situations still faces significant challenges, and this tutorial will discuss several such difficulties. In particular, niching in static and dynamic environments will be specifically addressed. Following this, we will present a suite of new benchmark functions specifically designed to reflect the characteristics of these challenges. Performance metrics for comparing niching methods will also be presented, and their merits and shortcomings will be discussed. Experimental results across both classic and more recently developed niching methods will be analysed based on selected performance metrics. Apart from the benchmark test functions, several examples of applying niching methods to solving real-world optimization problems will be provided. This tutorial will use several demos to show the workings of niching methods.
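As a concrete reference for one of the classic niching methods named above, the following minimal Python sketch implements plain fitness sharing on a one-dimensional maximization problem; the sharing radius and the triangular sharing function are standard textbook choices, not settings recommended by the tutorial.

    def shared_fitness(population, raw_fitness, sigma_share=0.1, alpha=1.0):
        # Derate each individual's fitness by its niche count, i.e. the sum of
        # sharing-function values over all individuals within sigma_share.
        shared = []
        for i, xi in enumerate(population):
            niche_count = 0.0
            for xj in population:
                d = abs(xi - xj)
                if d < sigma_share:
                    niche_count += 1.0 - (d / sigma_share) ** alpha
            shared.append(raw_fitness[i] / niche_count)
        return shared

    if __name__ == "__main__":
        import math
        # A multimodal 1-D test function with several peaks in [0, 1].
        pop = [i / 50.0 for i in range(51)]
        raw = [math.sin(5.0 * math.pi * x) ** 6 for x in pop]
        ranked = sorted(zip(pop, shared_fitness(pop, raw)), key=lambda t: -t[1])
        for x, f in ranked[:5]:
            print(round(x, 2), round(f, 3))

Because crowded regions inflate the niche count, individuals sitting alone on a peak keep more of their raw fitness, which is what maintains multiple sub-populations.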
Biography
Xiaodong Li received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. Currently, he is an Associate Professor at the School of Computer Science and Information Technology, RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, complex systems, multiobjective optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, the journal Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member and currently a Vice-chair of the following three IEEE CIS Task Forces: Swarm Intelligence, Large Scale Global Optimization, and Multimodal Optimization. He was the General Chair of SEAL'08, a Program Co-Chair of AI'09, and a Program Co-Chair for IEEE CEC'2012. He is the recipient of the 2013 SIGEVO Impact Award. For further information, please visit his website.
Organized by Benjamin Doerr and Carola Doerr
Abstract
This tutorial provides a smooth introduction to the theory of evolutionary algorithms. We shall start by discussing
Biography
Benjamin Doerr
Benjamin Doerr is a full professor at the French Ecole Polytechnique.
He also is an adjunct professor at Saarland University. He received
his diploma (1998), PhD (2000) and habilitation (2005) in mathematics
from Kiel University. His research area is the theory both of
problem-specific algorithms and of randomized search heuristics like
evolutionary algorithms. Major contributions to the latter include
runtime analyses for evolutionary algorithms and ant colony
optimizers, as well as the further development of the drift analysis
method, in particular, multiplicative and adaptive drift. In the young
area of black-box complexity, he proved several of the current best
bounds.
Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded
the theory track at GECCO, served as its co-chair 2007-2009 and 2014.
He is the Hot-off-the-press chair for GECCO 2016. He is a member of
the editorial boards of "Evolutionary Computation", "Natural
Computing", "Theoretical Computer Science" and "Information Processing
Letters". Together with Anne Auger, he edited the book "Theory of
Randomized Search Heuristics". He gave tutorials on various theory
topics at GECCO, CEC, PPSN, and other venues.
Carola Doerr is a permanent CNRS researcher at the Université Pierre et Marie Curie (Paris 6).
She studied mathematics at Kiel University (Germany, Diploma in 2007) and
computer science at the Max Planck Institute for Informatics and
Saarland University (Germany, PhD in 2011). From Dec. 2007 to
Nov. 2009, Carola Doerr worked as a business consultant for
McKinsey & Company, mainly in the area of network optimization,
where she has used randomized search heuristics to compute more
efficient network layouts and schedules.
Before joining the CNRS she was a post-doc at the
Université Diderot (Paris 7) and the Max Planck Institute
for Informatics.
Carola's main research interest is in the theory of randomized
algorithms, both in the analysis of existing algorithms
as well as in the design of novel algorithmic approaches.
She has published several papers about
black-box complexity and has contributed to the field of
evolutionary computation also through results on the runtime analysis
of evolutionary algorithms and drift analysis, as well as through the
development of search heuristics for solving geometric discrepancy problems.
Carola has been chairing the theory track at GECCO in 2015 (together with Francisco Chicano)
and she serves as tutorial chair of PPSN 2016 (together with Nicolas Bredeche).
Since 2014 she has also been involved in the organization of the women@GECCO workshop, and she chairs its organization committee in 2016.
She has been a tutorial speaker at GECCO 2013, 2014, and 2016.
The IEEE WCCI 2016 will feature workshops to be held in conjunction with the main event. The overall purpose of a workshop is to provide participants with the opportunity to present and discuss novel research ideas on active and emerging topics of Computational Intelligence.
IEEE WCCI 2016 Workshops
Organized by Stefano Squartini, Derong Liu, Francesco Piazza, Dongbin Zhao and Haibo He
Workshop-01A - Monday, 25 July 2016, 8:00 - 10:00am, Room: 208-209
Workshop-01B - Tuesday, 26 July, 4:30 - 6:30pm, Room: 208-209
The sustainable use of energy resources is an issue that humanity has been seriously facing over the last decade, as a consequence of
ever-higher energy demand worldwide and the strong dependence on oil-based fuels. This has pushed scientists and engineers worldwide to
intensify their studies on renewable energy resources, especially in the electrical energy sector. At the same time, a remarkable increase in
the complexity of the electrical grid has been registered at diverse levels in order to accommodate varied and distributed generation and
storage sites, resulting in strong engineering challenges in terms of energy distribution, management and system maintenance. This has
yielded a flourishing scientific literature on sophisticated algorithms and systems aimed at introducing intelligence into the electrical
energy grid, with several effective solutions already available on the market. These efforts have also recently cross-fertilized research and
development of commercial products for other grid types, such as smart water and natural gas grids, which have been attracting increasing
interest over the last five years.
The many different needs of heterogeneous grid customers, at diverse grid levels, and the different peculiarities of the energy sources to
be included in the grid itself make the task challenging and multi-faceted. Along the same lines, a wide variety of interventions can be
applied to the grid to increase its degree of automation, optimal functioning, security and reliability, further increasing the engineering
appeal of the problem. A coordinated multi-disciplinary effort from the scientific communities operating in the Electrical and
Electronic Engineering, Computational Intelligence, Digital Signal Processing and Telecommunications research fields is therefore required
to provide adequate technological solutions, bearing in mind the increasingly stringent constraints in terms of environmental sustainability.
Focusing on the interests of our scientific community, the organizers of this Workshop want to explore the new frontiers and challenges within the
Computational Intelligence research area, including neural network based solutions, for the optimal usage and management of energy resources in Smart
Grid scenarios. Indeed, the recent adoption of distributed sensor networks in many grid contexts has made available data that can be used to develop
suitable expert systems to support humans in dealing with complex grid management problems, from multiple application
perspectives. Related research is undoubtedly already flourishing, but many open issues remain to be studied and innovative intelligent systems to be investigated.
Building on the success of the CEMiSG2014 Workshop organized within the IJCNN2014 conference in Beijing (China) and of the CEMiSG2015 Workshop
organized within the IJCNN2015 conference in Killarney (Ireland), the third edition of the CEMiSG Workshop again aims to provide a productive
discussion forum for scientists joining the IJCNN2016 conference at WCCI2016.
Topics
Workshop topics include, but are not limited to:
Organized by Huajin Tang, Gang Pan, Arindam Basu and Luping Shi
Workshop-02 - Tuesday, 26 July, 8:00 - 10:00am, Room: 208-209
Emulating brain-like learning performance has been a key challenge for research in neural networks and learning systems, including recognition,
memory and perception. In the last few decades, a wealth of machine learning approaches have been proposed, including sparse representations
and hierarchical and deep neural networks. While achieving impressive performance, these methods still compare poorly to biological systems,
and the problem of reducing the amount of human supervision and computation needed for learning remains a challenge.
On the other hand, novel data representation and learning approaches arising from recent advances in neuromorphic systems have shown
appealing computational advantages. For example, using neural coding theory to represent external sensory data and developing spike-timing
based learning algorithms have achieved real-time learning performance, both in neuromorphic computational models and in hardware systems. Thanks
to new visual and auditory sensors, neuromorphic hardware provides a fundamentally different technique for data representation, i.e., asynchronous
events rather than frames of images as in mainstream recognition algorithms. However, current neuromorphic information processing algorithms
cannot yet match the sophisticated features and powerful learning performance that machine learning approaches offer. One promising direction is
to develop integrated learning models that combine brain-like data representation and learning mechanisms, e.g., implementing deep learning in neuromorphic
systems. Neuromorphic systems also overlap with another framework called cyborg intelligence, which combines brain functions with computational machines to
achieve the best of both via brain-machine interfaces. The workshop will target the challenging problems in these areas by reporting new solutions and
theoretical and technical advances in neuromorphic computing and cyborg intelligence from researchers and engineers worldwide.
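As one concrete example of the spike-timing-based learning mentioned above, the minimal Python sketch below computes a pair-based spike-timing-dependent plasticity (STDP) weight update; the amplitudes and time constants are illustrative values, not parameters of any particular neuromorphic platform discussed at the workshop.

    import math

    def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                     tau_plus=20.0, tau_minus=20.0):
        # Pair-based STDP: potentiate when the presynaptic spike precedes the
        # postsynaptic spike, depress when it follows it (times in ms).
        dt = t_post - t_pre
        if dt >= 0:
            return a_plus * math.exp(-dt / tau_plus)
        return -a_minus * math.exp(dt / tau_minus)

    if __name__ == "__main__":
        # Weight change for a few pre/post spike-time differences.
        for dt in (-40, -10, 0, 10, 40):
            print(dt, round(stdp_delta_w(0.0, float(dt)), 5))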
Relevant Topics
Workshop topics include, but are not limited to:
Organized by Yun Li, Cesare Alippi, Thomas Bäck, Piero Bonissone, Stefano Cagnoni, Carlos Coello Coello, Oscar Cordón, Kalyanmoy Deb, David Fogel, Marouane Kessentini, Yuhui Shi, Xin Yao and Mengjie Zhang
Workshop-03A - Wednesday, 27 July 2016, 2:30 - 4:30pm, Room: 208-209
Workshop-03B - Wednesday, 27 July 2016, 4:30 - 6:30pm, Room: 208-209
Since the first WCCI took place in Orlando in 1994, this Congress series and the Evolutionary Computation community have progressed tremendously. A number of CEC Panel Sessions have been held to explore future directions of Evolutionary Computation. As part of the forthcoming WCCI, CEC 2016 in Vancouver promises around 50 Special Sessions covering a comprehensive range of activities. A Workshop to explore "Key Challenges and Future Directions of Evolutionary Computation" and to reach consensus among academia and industry is therefore timely. This format (instead of a discussion-only forum or panel session) will allow position papers that are submitted, peer reviewed and duly accepted to be recorded in the CEC Workshop Proceedings for future reference.
Following individual presentations of accepted position papers, small breakout sessions will be held in parallel to explore deeper and broader views. An open panel discussion will then follow, seeking convergence between academia and industry. It is intended that a short summary will be written up later for the IEEE Computational Intelligence Magazine as a separate article for the CIS community and beyond.
Relevant Topics
You are warmly invited to submit a position paper with rationale, rigour and supporting evidence on one or more of the following aspects
Technical Requirements
Your position paper should be academic and should normally contain
Organized by Giovanni Montana, Carlo Francesco Morabito and Roberto Tagliaferri
Workshop-02 - Tuesday, 26 July, 8:00 - 10:00am, Room: 208-209
Over the years, huge quantities of data have been generated by large-scale scientific experiments (biomedical, “omic”, imaging, astronomical, etc.), big industrial companies and on the web. One of the characteristics of such Big Data is that it often includes multiple “views” of the underlying objects. For instance, in biomedical research, various “omics” technologies (e.g. mRNA, miRNA etc.) or imaging technologies (MRI, CT, PET, etc.) can generate multiple measurements of the same biological samples.
In order to handle this level of complexity, new machine learning and computational intelligence methodologies, such as neural networks and graph mining amongst others, have been proposed to analyze and/or visualize these multi-faceted datasets in an attempt to better exploit the information they contain.
The aim of the workshop is to solicit new approaches to real-world scientific and industrial big data integration.
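To make the notion of integrating multiple views more concrete, the following minimal Python sketch contrasts early fusion (feature concatenation) with late fusion (averaging per-view scores) on synthetic data; the two "views" and the nearest-centroid scorer are illustrative stand-ins for real omics or imaging modalities, and the evaluation is in-sample for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two synthetic "views" of the same 200 samples, two classes.
    labels = rng.integers(0, 2, size=200)
    view_a = rng.normal(labels[:, None] * 1.0, 1.0, size=(200, 5))  # e.g. expression-like
    view_b = rng.normal(labels[:, None] * 0.5, 1.0, size=(200, 8))  # e.g. imaging-like

    def centroid_scores(view, labels):
        # Score = distance to class-0 centroid minus distance to class-1 centroid.
        c0 = view[labels == 0].mean(axis=0)
        c1 = view[labels == 1].mean(axis=0)
        return np.linalg.norm(view - c0, axis=1) - np.linalg.norm(view - c1, axis=1)

    # Early fusion: concatenate features from both views, then score.
    early = centroid_scores(np.hstack([view_a, view_b]), labels)

    # Late fusion: score each view separately, then average the scores.
    late = (centroid_scores(view_a, labels) + centroid_scores(view_b, labels)) / 2.0

    for name, s in (("early", early), ("late", late)):
        acc = np.mean((s > 0).astype(int) == labels)
        print(name, round(float(acc), 3))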
Relevant Topics
Papers must present original work or review the state-of-the-art in the following non-exhaustive list of topics:
Panel sessions bring together the perspectives of many panelists into a cohesive conversation of innovative ideas. These panel sessions will consist of presentations/open discussions by panelists who will directly engage with the conference audience. These sessions add an enriching dimension to the conference experience and a welcome networking alternative to traditional paper presentations, which dominate some conferences.
IEEE WCCI 2016 is pleased to confirm a Panel Session on “Turning Big Data Challenges into Opportunities”
Date: 25 July 2016 (Monday)
Time: 2:30 - 4:30 pm
Venue: Ballroom B
Xin Yao (Moderator)
2014-2015 IEEE CIS President
Kathy Grise (Moderator)
IEEE Future Directions Program Director
Andrew Feng
Yahoo
Dimitar Filev
Ford
Danil Prokhorov
Toyota
Yuandong Tian
Facebook
Piero Bonissone
Piero P Bonissone Analytics LLC
IEEE WCCI 2016 is pleased to confirm a Panel Session on “How to Publish Your Research Papers in IEEE CIS Transactions?”
Date: 26 July 2016 (Tuesday)
Time: 2:30 - 4:30 pm
Venue: Ballroom B
Haibo He
Editor-in-Chief, IEEE Transactions on Neural Networks and Learning Systems
Chin-Teng Lin
Editor-in-Chief, IEEE Transactions on Fuzzy Systems
Kay Chen Tan
Editor-in-Chief, IEEE Transactions on Evolutionary Computation
Graham Kendall
Editor-in-Chief, IEEE Transactions on Computational Intelligence and AI in Games
Yaochu Jin
Editor-in-Chief, IEEE Transactions on Cognitive and Developmental Systems
Hisao Ishibuchi
Editor-in-Chief, IEEE Computational Intelligence Magazine
Jacek M. Zurada
IEEE Life Fellow, Past IEEE VP-Technical Activities
Nikhil R. Pal (Moderator)
IEEE CIS Vice-President for Publications
IEEE WCCI 2016 is pleased to confirm a Panel Session on “IEEE and CIS in the Next Decade”
Date: 27 July 2016 (Wednesday)
Time: 2:30 - 4:30 pm
Venue: Ballroom B
Karen Bartleson
2016 IEEE President-elect
Vincenzo Piuri
2015 IEEE VP - Technical Activities
Jacek M. Zurada
2014 IEEE VP - Technical Activities
Xin Yao
2014-2015 IEEE CIS President
Marios M. Polycarpou
2012-2013 IEEE CIS President
Pablo A. Estévez
Moderator, 2016-2017 IEEE CIS President
IEEE WCCI 2016 is pleased to confirm a Panel Session on “The IEEE CIS History Panel: Yesterday, Today, and Tomorrow”
Date: 26 July 2016 (Tuesday)
Time: 4:30 - 6:30 pm
Venue: Ballroom B
Piero Bonissone (Moderator)
President of NNS, 2002
James Bezdek
President of NNS, 1997 - 98
Russell Eberhart
President of NNS, 1992 - 93
Vincenzo Piuri
President of CIS, 2006 - 07
Rudolf Seising
Historian of Fuzzy Logic and CIS
Jacek M. Zurada
President of CIS, 2004 - 05
A memorial session will be organized at IEEE WCCI 2016 to commemorate the lifetime achievements of David Casasent (1942-2015).
This session addresses issues of advanced image processing, optical image processing, and pattern recognition, with an emphasis on recent advances in Big Data.
The session includes an invited talk and a panel covering these topics in the light of the work of David Casasent.
The featured speaker for the Dave Casasent memorial session will be Prof. Ashit Talukder from the University of North Carolina at Charlotte.
Date: 27 July 2016 (Wednesday)
Time: 4:30 - 5:25 pm
Venue: Ballroom B
Ali Minai (Moderator)
Hava Siegelmann (Moderator)
Robert Kozma (Moderator)
A memorial session will be organized at IEEE WCCI 2016 to commemorate the lifetime achievements of Walter J. Freeman (1927-2016).
Computational neurodynamics is a field which has been significantly defined and shaped by the pioneering work of Walter J. Freeman (1927-2016).
His pivotal contributions include the discovery and modeling of the nonlinear dynamical oscillations utilized by vertebrate brains to create perception.
Recent experimental and theoretical advances in these areas will be discussed in this session in a talk and by panelists.
Date: 27 July 2016 (Wednesday)
Time: 5:35 - 6:30 pm
Venue: Ballroom B
Ali Minai (Moderator)
Hava Siegelmann (Moderator)
Robert Kozma (Moderator)
The IEEE WCCI 2016 will feature competitions to be held in conjunction with the main event. The overall purpose of a competition is to provide participants with the opportunity to benchmark and compare novel approaches on active and emerging topics of Computational Intelligence.
Prospective competition organizers are invited to submit their proposals to the Competitions Chair, Dr. Chang-Shing Lee and Dr. Simon M. Lucas by 15 December 2015. A common website for all accepted competitions will be provided on the congress website at www.wcci2016.org, for which a link to the homepage of each competition maintained by their organizers independently will be provided.
IEEE WCCI 2016 Competitions
Organized by P N Suganthan, Mostafa Z. Ali, Qin Chen, J. J. Liang and B. Y. Qu
Competition goals
The goals are to evaluate the current state of the art in single objective optimization with bound constraints and to propose novel benchmark problems with diverse characteristics. The algorithms will be evaluated under budgets ranging from a very small to a large number of function evaluations, and in scenarios requiring a single solution as well as multiple solutions. Under the above scenarios, novel problems will be designed for the first time to emulate real-world problem solving. In particular, the following cases will also be considered:
Contributions to the Evolutionary Computation Community
Single objective numerical optimization is among the most fundamental classes of problems: virtually all new evolutionary and swarm algorithms are tested on single objective benchmark problems. In addition, these single objective benchmark problems can be transformed into dynamic, niching, composition, computationally expensive and many other classes of problems.
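As a hedged illustration of how a base function can be turned into new benchmark instances with different characteristics, the sketch below applies a random shift and rotation to the sphere function; the actual CEC 2016 test functions are those defined in the competition's technical report and software, which must be used for any entry.

    import numpy as np

    rng = np.random.default_rng(2016)
    DIM = 10

    # Random shift vector and orthogonal rotation matrix (via QR decomposition).
    shift = rng.uniform(-80.0, 80.0, size=DIM)
    rotation, _ = np.linalg.qr(rng.normal(size=(DIM, DIM)))

    def sphere(z):
        return float(np.sum(z * z))

    def shifted_rotated_sphere(x, bias=100.0):
        # Transform the search point, then evaluate the base function.
        z = rotation @ (np.asarray(x, dtype=float) - shift)
        return sphere(z) + bias

    if __name__ == "__main__":
        print(shifted_rotated_sphere(shift))          # at the optimum: returns the bias
        print(shifted_rotated_sphere(np.zeros(DIM)))  # elsewhere: a much larger value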
How to submit an entry and how to evaluate them
Potential authors are asked to make use of the software of benchmark problems (in Matlab, C or Java) to be distributed from the competitions web pages to test their algorithms either with or without surrogate methods. The authors have to execute their novel or existing algorithms on the given benchmark problems and present the results in various formats as outlined in the technical report. The evaluation criteria will also be specified in the technical report. The authors are asked to prepare a conference paper detailing the algorithms used and the results obtained on the given benchmark problems and submit their papers to the associated special session within CEC 2016. The authors presenting the best results should also be willing to release their software for verification before declaring the eventual winners of the competition.
Special Session Associated With This Competition
This competition requires all entries to have an associated conference paper submitted. We also expect at least one author of each entry to register, attend the conference and present their paper(s).
Organized by Chang-Shing Lee, Shi-Jim Yen, I-Chen Wu and Hsin-Hung Chou
Competition goals
In order to enhance the fun of Go games between humans and computer Go programs and to stimulate the development of and research on computer Go programs, we submitted this proposal to hold the Human vs. Computer Go Competition at IEEE WCCI 2016, Vancouver Convention Centre, Vancouver, Canada. Before Google AlphaGo came out in Oct. 2015, the level of computer Go programs on the 19x19 board approached that of a 6-dan human player. As of Mar. 2016, the rank of Google AlphaGo, running on more than 1000 CPUs and 100 GPUs, is 9P. Hence, we will focus on human vs. computer Go competitions under limited computational hardware resources. The objective of the competition is to highlight ongoing research on Computational Intelligence approaches as well as their future applications in game domains.
Expected Humans
Expected Computer Go Programs
Organized by Hugo Jair Escalante, ChaLearn and Codalab
Abstract
Enter the final rounds of the AutoML challenge, with prizes donated by Microsoft and NVIDIA: create a fully automatic learning machine capable of solving classification and regression tasks without any human intervention. We launched AutoML in 2015 as part of the IJCNN 2015 competition program. Rounds 0 through 2 have been completed and the challenge has now entered round 3 (advanced). The challenge will terminate in March 2016. New this year: we added a GPU track sponsored by NVIDIA, which should reinforce the possibilities for deep learning methods (a.k.a. neural networks) to contribute. Winning is not as hard as you may think: nobody beat the baseline method using Naive Bayes in the last AutoML phase! Simple methods are sometimes best.
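As a minimal illustration of the "no human intervention" spirit of the challenge (and not the organizers' starting kit or official baseline), the sketch below selects among a small portfolio of scikit-learn classifiers by cross-validation; the dataset and model choices are assumptions made for this example only.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    def auto_select(X, y):
        # Try a small portfolio of models and keep the best cross-validated one.
        candidates = {
            "naive_bayes": GaussianNB(),
            "logistic_regression": LogisticRegression(max_iter=1000),
            "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        }
        scores = {name: cross_val_score(model, X, y, cv=5).mean()
                  for name, model in candidates.items()}
        best = max(scores, key=scores.get)
        return best, scores

    if __name__ == "__main__":
        X, y = load_digits(return_X_y=True)
        print(auto_select(X, y))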
Organized by Martin Štěpnička and Michal Burda
Competition Method
The prediction competition is open to all methods of computational intelligence, including fuzzy methods, artificial neural networks, evolutionary algorithms, decision & regression trees, support vector machines, hybrid approaches, etc., used in all areas of forecasting, prediction and time series analysis. Ensemble techniques are also allowed, provided they employ at least one CI method. The contestants will use a single consistent methodology for all the time series. The only evaluation criterion is the Symmetric Mean Absolute Percentage Error (SMAPE). The results will be revealed during IEEE WCCI 2016.
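For reference, one widely used definition of SMAPE is sketched below in Python; if the competition's technical documentation defines the scaling differently, that definition takes precedence.

    def smape(actual, forecast):
        # Symmetric Mean Absolute Percentage Error, in percent:
        # SMAPE = (100 / n) * sum( |F_t - A_t| / ((|A_t| + |F_t|) / 2) )
        n = len(actual)
        total = 0.0
        for a, f in zip(actual, forecast):
            denom = (abs(a) + abs(f)) / 2.0
            if denom != 0.0:
                total += abs(f - a) / denom
        return 100.0 * total / n

    if __name__ == "__main__":
        print(smape([100, 200, 300], [110, 190, 330]))  # about 8.06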
Organized by Diego Perez and Simon Lucas
Abstract
For WCCI 2016, we propose to run a two-player planning version (track – we call each main variant of GVG-AI a track). For this track, each player is given access to a forward model of the game, but does not know in advance what game is being played, nor of course what the opponent will do. This is a logical next step in the challenge, and we expect it to create a good deal of interest (and we will retain the sponsorship of Google DeepMind).
Organized by Alexander Lavin and Subutai Ahmad
Competition Goal
Much of the world’s data is streaming, time-series data, where anomalies give significant information in often-critical situations. Yet detecting anomalies in streaming data is a difficult task, requiring detectors to process data in real-time, not batches, and learn while simultaneously making predictions. The Numenta Anomaly Benchmark (NAB) attempts to provide a controlled and repeatable environment of open-source tools to test and measure anomaly detection algorithms on streaming data.
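As a hedged, minimal illustration of the streaming constraint described above (and not of NAB's scoring code or of any Numenta algorithm), the following sketch flags points whose rolling z-score exceeds a threshold, updating its statistics one sample at a time.

    from collections import deque
    import math

    class RollingZScoreDetector:
        # Keeps a sliding window of recent values; each new point is scored
        # against the window statistics before being added (online operation).
        def __init__(self, window=50, threshold=3.0):
            self.window = deque(maxlen=window)
            self.threshold = threshold

        def update(self, value):
            is_anomaly = False
            if len(self.window) >= 10:
                mean = sum(self.window) / len(self.window)
                var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
                std = math.sqrt(var)
                if std > 0 and abs(value - mean) / std > self.threshold:
                    is_anomaly = True
            self.window.append(value)
            return is_anomaly

    if __name__ == "__main__":
        detector = RollingZScoreDetector()
        stream = [10.0 + 0.5 * math.sin(i) for i in range(100)] + [42.0, 10.0]
        # The large spike near the end should be the only flagged index.
        print([i for i, v in enumerate(stream) if detector.update(v)])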
The NAB competition will distribute awards for two distinct components:
Organized by Michael G. Epitropakis, Xiaodong Li and Andries Engelbrecht
Competition Aim
The aim of the competition is to provide a common platform that encourages fair and easy comparisons across different niching algorithms. The competition allows participants to run their own niching algorithms on 20 challenging benchmark multimodal functions with different characteristics and levels of difficulty. Researchers are welcome to evaluate their niching algorithms using this benchmark suite, and report the results by submitting a paper to the associated niching special session (i.e., submitting via the online submission system of CEC'2016).
Organized by Chang-Shing Lee, Giovanni Acampora,, Ryosuke Saga, Marek Reformat and Hsin-Hung Chou
Competition Topic
“Who will like the article that you posted on Facebook?” Please design a Fuzzy Markup Language (FML) system to predict how many likes your posted article will receive within one to three weeks. Competitors have to describe which variables are involved in the knowledge base (KB) of the FML system. Competitors can use an expert-based or a machine learning approach to identify the rule base.
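As a loose, purely hypothetical illustration of the kind of knowledge base and rule base competitors will describe, the sketch below uses two invented input variables and three Sugeno-style rules in plain Python; an actual entry must of course be expressed in FML using the organizers' Java-based tool, and the variables, terms and output levels here are placeholders only.

    def tri(x, a, b, c):
        # Triangular membership function with peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def predict_likes(follower_count, post_hour):
        # Knowledge base: fuzzy terms for two hypothetical input variables.
        followers_low = tri(follower_count, -1, 0, 500)
        followers_high = tri(follower_count, 300, 2000, 10000)
        hour_peak = tri(post_hour, 17, 20, 23)  # evening posting time
        hour_off = max(tri(post_hour, -1, 3, 17), tri(post_hour, 22, 24, 25))

        # Rule base (zero-order Sugeno style): each rule maps to a crisp level.
        rules = [
            (min(followers_high, hour_peak), 200.0),  # many followers, peak hours
            (min(followers_high, hour_off), 80.0),    # many followers, off-peak
            (followers_low, 10.0),                    # few followers
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules)
        return num / den if den > 0 else 0.0

    if __name__ == "__main__":
        print(round(predict_likes(follower_count=1500, post_hour=20), 1))
        print(round(predict_likes(follower_count=100, post_hour=9), 1))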
Competition Method
The competition will be completed before the conference. We will release the Java-based FML tool and call for applications to construct the knowledge base and rule base in FML. Competitors should construct the FML system and write a system description document of 2 or 3 pages. The competition will be held over the Internet. The winners may be invited to present their FML systems at IEEE WCCI 2016.
Details about the competition can be found here.
Organized by Hui Li, Qingfu Zhang, P. N. Suganthan, Aimin Zhou, Kalyanmoy Deb, Hisao Ishibuchi and Carlos Coello Coello
Over the past thirty years, multiobjective evolutionary algorithms (MOEAs) have become prominent methodologies for solving multiobjective optimization problems (MOPs). The major strength of MOEAs is that they are able to find an approximation of the whole Pareto front (PF) in a single run. Fitness assignment and diversity maintenance are two major research issues in MOEAs. To design an efficient MOEA, the balance between them must be carefully considered. It should be mentioned that no single MOEA can have good performance on all MOPs. Therefore, it is quite important to investigate the suitability of MOEAs with respect to different problem difficulties. During the past few years, problem features that can challenge MOEAs in convergence or diversity have attracted much attention.
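Since the Pareto front is central to the description above, the short generic sketch below shows a standard Pareto-dominance check and the extraction of the non-dominated set from a list of objective vectors (minimization assumed); it is textbook code, not part of the competition's evaluation software.

    def dominates(a, b):
        # a dominates b (minimization): no worse in all objectives and strictly
        # better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated(points):
        # Return the subset of objective vectors not dominated by any other.
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    if __name__ == "__main__":
        objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
        print(non_dominated(objs))  # (3.0, 4.0) is dominated by (2.0, 3.0)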
Organized by Bart Thomee, Pierre Garrigues, Liangliang Cao and David A. Shamma
Challenge overview
The members of the Flickr community manually tag photos with the goal of making them searchable and
discoverable. With the advent of mobile phone cameras and auto-uploaders, photo uploads have become more
numerous and asynchronous, and manual tagging is cumbersome for most users. Progress has been largely
driven by training deep neural networks on datasets, such as ImageNet, that were built by manual annotators.
However, acquiring annotations is expensive. In addition, the different categories of annotations are defined
by researchers and not by users, which means they are not necessarily relevant to users’ interests, and cannot
be directly leveraged to enable search and discovery.
We believe our proposed challenge fills a void that is not currently addressed by existing challenges. Our
challenge is uniquely aligned with the context of computational intelligence and machine learning: (1) our
dataset contains on the order of 100 million photos, which reflects well the challenges of understanding
multimedia at large scale, and (2) our benchmark focuses on user-generated content, where a large vocabulary
of concepts is collected from tags annotated by users. In contrast, the ImageCLEF annotation task focuses on
the occurrence of 251 English words that appear on web pages instead of being directly associated with images,
while the ImageNet challenge considers synsets from the WordNet dictionary as annotations, and many of
them do not appear in real-world images like those uploaded to Flickr.
Our challenge focuses on replicating how people annotate photos, rather than just focusing on photo
annotation without the human component. Our challenge asks participants to build image analysis systems
that think like humans: the correct annotation for an image isn’t necessarily the “true label”. For example,
while a photo containing an apple, a banana and a pear could be annotated using these three words, a person
more likely would annotate the image with the single word “fruit”.
As the problem of automatic image annotation is not close to being solved, we intend to hold our grand
challenge over multiple years. Depending on the progress of the submissions and the state of the art, the
difficulty of the challenge could increase.
Apart from attending the technical programs, participants are also invited to attend various social events to be held during IEEE WCCI 2016, such as Welcome Reception, Award Banquet, Women in Computational Intelligence (WCI) Reception, Student Activities and Young Professionals Reception, IEEE CIS Chapters Forum, etc.
The Women in Computational Intelligence (WCI) Committee invites all the ladies to attend the WCI-Reception at Room 114-115, Vancouver Convention Centre
(West Building) on Thursday evening (7:00 - 8:30 pm), 28 July 2016.
It is one of our traditions to organize receptions for women at the IEEE Computational Intelligence Society (CIS) conferences.
IEEE CIS is fully committed to ensuring equal opportunities for both genders in the society's life and in the computational intelligence arena, and the WCI-committee develops, promotes and runs activities directed at achieving this goal.
Help us in building a strong WCI-Community!
Sanaz Mostaghim
Chair of WCI-Committee 2016
The IEEE Computational Intelligence Student Activities and Young Professionals sub-committees invite all students, young professionals, and their supervisors and mentors to attend a reception at Room 301-305, Vancouver Convention Centre (West Building) on Monday evening (7:30 - 9:00 pm), 25 July 2016.
The reception will be opened by Professor Pablo A. Estevez, President of the IEEE Computational Intelligence Society (CIS). This will be followed by a short presentation about opportunities and activities for students and young professionals within the CIS, and then a social networking activity (speed-dating).
Come along and network, meeting old friends and new ones!
Dipti Srinivasan
IEEE WCCI 2016 Student Activities Chair
Keeley Crockett
Chair of IEEE Computational Intelligence Student Activities
Albert Lam
Chair of IEEE Computational Intelligence Young Professionals
The IEEE CIS has more than 100 Chapters and Student Chapters, composed of active volunteers who add value for our members by organizing events, lectures, summer schools, workshops and competitions, as well as networking and social events.
We would like to invite all IEEE CIS Chapter Chairs/Officer volunteers, including Student Chapters officers, to the 2016 IEEE CIS Chapters Forum which will be organized as part of IEEE WCCI 2016 at Room 306, Vancouver Convention Centre (West Building) on Monday evening (7:30 - 9:00 pm), 25 July 2016. Members who are interested in setting up new IEEE CIS Chapters are also welcome to join the Forum.
During the Forum, we will discuss how to make the most of the initiatives organized by the IEEE, how to receive funding to organize various events, and how to stay informed about best practices.
As the number of places is limited, please send us an email expressing your interest in joining the Forum: demetrios.g.eliades@ieee.org
Demetrios Eliades
Chair of IEEE CIS Chapters Subcommittee