Monday, 25 July 2016 (7:00 - 7:30 pm), Ballroom B
The real potential of artificial intelligence comes in having machines learn to solve problems without being told how to solve them. Researchers have been exploring the potential for machine learning for many decades, and much of that research comes in the area of games. In this public lecture, I’ll provide a little history on some of the earliest efforts to get machines to solve problems on their own, and then describe some fun experiments that I led to get computers to learn to play checkers and chess. We’ll also show some animations of characters in video games that learn to improve their behavior over time.
Dr. David Fogel is President of Natural Selection, Inc., an internationally recognized award-winning company with a 23-year history of solving challenging engineering problems. Dr. Fogel is a Fellow of the IEEE, the world’s largest organization of professional engineers. He received the Ph.D. in engineering from UC San Diego and has an honorary doctorate from the University of Pretoria, South Africa. Dr. Fogel has published 7 books and over 200 papers in technical journals, conference proceedings, and other venues. He’s a recipient of several awards, including an IEEE Technical Field Award and the CajAstur Prize for Soft Computing. Dr. Fogel is also co-inventor of EffectCheck®, a sentiment analysis system that relies on a 50,000-word emotional thesaurus, and holds 12 patents.
Monday, 25 July 2016 (10:30 - 11:30 am), Ballroom A+B+C
Machine learning technology allows us to automate many tasks that humans learn to perform through experience rather than through step-by-step instruction. Without such a learning capability we are limited to tasks for which a step-by-step sequence of instructions is known. In this talk we shall ask whether it is possible to circumscribe the set of tasks that we can expect to effectively automate through learning. The discussion will start from the position that all the information that resides in living organisms was initially acquired either through learning by an individual or through evolution. It then should follow that any unified theory of evolution and learning will be able to characterize the capabilities that humans and other living organisms can potentially acquire and perform. These tasks then comprise justifiable targets for automation. We shall discuss where we are with such a theory.
Leslie Valiant was educated at King's College, Cambridge; Imperial College, London; and at Warwick University where he received his Ph.D. in computer science in 1974. He is currently T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the School of Engineering and Applied Sciences at Harvard University, where he has taught since 1982. Before coming to Harvard he had taught at Carnegie Mellon University, Leeds University, and the University of Edinburgh.
His work has ranged over several areas of theoretical computer science, particularly complexity theory, learning, and parallel computation. He also has interests in computational neuroscience, evolution and artificial intelligence and is the author of two books, Circuits of the Mind, and Probably Approximately Correct.
He received the Nevanlinna Prize at the International Congress of Mathematicians in 1986, the Knuth Award in 1997, the European Association for Theoretical Computer Science EATCS Award in 2008, and the 2010 A. M. Turing Award. He is a Fellow of the Royal Society (London) and a member of the National Academy of Sciences (USA).
Tuesday, 26 July 2016 (10:30 - 11:30 am), Ballroom A+B+C
The multiplicity of objectives in most cases of human decision-making requires that we use multi-criteria decision functions in many tasks. These functions play a fundamental role in diverse areas such as information retrieval, pattern recognition, medical diagnosis, information fusion and target recognition. Central to the construction of multi-criteria decision functions is the modeling of the appropriate relationship between the individual component criteria involved in the decision function. In many situations human beings are able to linguistically express the appropriate relationship between the component criteria. Because of its ability to provide a bridge between linguistic expression and mathematical modeling, fuzzy sets technology provides an ideal framework for the construction of multi-criteria decision functions. In this talk we shall describe a number of aggregation operators associated with fuzzy set theory and see how they can be used to formulate multi-criteria decision functions. Particular attention will be paid to formulating multi-criteria functions from linguistically specified user requirements.
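One family of fuzzy aggregation operators often used to move between "anding" and "oring" of criteria is the Ordered Weighted Averaging (OWA) operator. A minimal sketch (the criterion scores and weight vectors below are illustrative, not taken from the talk):

```python
def owa(scores, weights):
    """Ordered Weighted Averaging (OWA) aggregation.

    The scores are sorted in descending order and combined with a
    weight vector that is nonnegative and sums to 1.  The weights
    control where the operator sits between min ("and") and max ("or").
    """
    assert len(scores) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

sat = [0.7, 0.4, 0.9]              # satisfaction of three criteria in [0, 1]
print(owa(sat, [1.0, 0.0, 0.0]))   # 0.9 -> pure "or": best criterion wins
print(owa(sat, [0.0, 0.0, 1.0]))   # 0.4 -> pure "and": worst criterion wins
print(owa(sat, [1/3, 1/3, 1/3]))   # arithmetic mean of the criteria
```

Intermediate weight vectors realize linguistically specified quantifiers such as "most criteria satisfied", which is the kind of bridge from language to mathematics the talk describes.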
Ronald R. Yager has worked in the area of computational intelligence for over twenty-five years. He is Director of the Machine Intelligence Institute and Professor of Information Systems at Iona College. He is editor-in-chief of the International Journal of Intelligent Systems. He has published over 500 papers and edited over 30 books in areas related to fuzzy sets, human behavioral modeling, decision-making under uncertainty and the fusion of information. He is among the world’s top 1% most highly cited researchers, with over 45,000 citations in Google Scholar. He was the recipient of the IEEE Computational Intelligence Society Pioneer Award in Fuzzy Systems. He received the special honorary medal of the 50th Anniversary of the Polish Academy of Sciences. He received the Lifetime Outstanding Achievement Award from the International Fuzzy Systems Association. He recently received honorary doctorate degrees, honoris causa, from the Azerbaijan Technical University and the State University of Information Technologies, Sofia, Bulgaria. Dr. Yager is a fellow of the IEEE, the New York Academy of Sciences and the Fuzzy Systems Association. He has served at the National Science Foundation as program director in the Information Sciences program. He was a NASA/Stanford visiting fellow and a research associate at the University of California, Berkeley. He has been a lecturer at NATO Advanced Study Institutes. He is a visiting distinguished scientist at King Saud University, Riyadh, Saudi Arabia. He is an adjunct professor at Aalborg University in Denmark. He received his undergraduate degree from the City College of New York and his Ph.D. from the Polytechnic Institute of New York University. He is the 2016 recipient of the IEEE Frank Rosenblatt Award, the most prestigious honor given out by the IEEE Computational Intelligence Society.
Wednesday, 27 July 2016 (10:30 - 11:30 am), Ballroom A+B+C
Swarm robotics studies how to design and implement groups of robots that operate without relying on any external infrastructure or on any form of centralized control. Such robot swarms exploit self-organization to perform tasks that require cooperation between the robots. In the last ten years there has been a lot of progress in swarm robotics research. Quite complex robot swarms have been demonstrated in a number of different case studies. However, this progress has also led to the identification of some problems that might hinder further development. In the talk, I will discuss what I consider to be the main current issues in swarm robotics research: the manageability, fault tolerance and trust issues. I will then propose a few novel research directions that could allow us to successfully address these issues and therefore take robot swarms one step closer to real world deployment.
Marco Dorigo received his PhD degree in electronic engineering in 1992 from Politecnico di Milano, Milan, Italy. In 1995 he received the title
of Agrégé de l'Enseignement Supérieur (professorship qualification) from the Université Libre de Bruxelles, Brussels, Belgium.
From 1992 to 1993, he was a Research Fellow at the International Computer Science Institute, Berkeley, CA. In 1993, he was a NATO-CNR Fellow, and from 1994 to 1996, a Marie Curie Fellow. Since 1996, he has been a tenured researcher of the FNRS, the Belgian National Funds for Scientific Research, and a co-director of IRIDIA, the artificial intelligence laboratory of the Université Libre de Bruxelles.
Prof. Dorigo is the inventor of the ant colony optimization metaheuristic. His current research interests include swarm intelligence, swarm robotics, and metaheuristics for discrete optimization. At IRIDIA he leads a group of approximately twenty researchers who investigate various aspects of swarm intelligence and of its application to robotics, networks and optimization problems.
He is the Editor-in-Chief of Swarm Intelligence, an Associate Editor of the IEEE Transactions on Evolutionary Computation, of the IEEE Transactions on Cybernetics, of the IEEE Transactions on Autonomous Mental Development, of the ACM Transactions on Adaptive and Autonomous Systems, and a member of the Editorial Board of many journals on computational intelligence and adaptive systems.
As a result of his numerous scientific contributions, Prof. Dorigo has been awarded numerous international prizes, among them the Italian Prize for Artificial Intelligence in 1996, the Marie Curie Excellence Award in 2003, the Dr. A. De Leeuw-Damry-Bourlart Award in Applied Sciences in 2005, the Cajastur International Prize for Soft Computing in 2007, a European Research Council Advanced Grant in 2010, the IEEE Frank Rosenblatt Award in 2015, and the IEEE CIS Evolutionary Computation Pioneer Award in 2016.
Prof. Dorigo is a fellow of the Institute of Electrical and Electronics Engineers (IEEE), of the Association for the Advancement of Artificial Intelligence (AAAI), and of the European Coordinating Committee for Artificial Intelligence (ECCAI).
Thursday, 28 July 2016 (10:30 - 11:30 am), Ballroom A+B+C
The Web is the largest public big data repository that humankind has created. In this overwhelming data ocean we need to be aware of the quality of the data and, in particular, of the biases that exist in it, such as redundancy, spam, etc. These biases affect the machine learning algorithms that we design to improve the user experience. This problem is further exacerbated by biases that are added by these algorithms, especially in the context of recommendation systems. We give several examples and their relation to sparsity, novelty, and privacy, stressing the importance of the user context in avoiding these biases.
Ricardo Baeza-Yates’s areas of expertise are information retrieval, web search and data mining, data science and algorithms. He was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from January 2006 to February 2016. He is a part-time Professor at the DTIC of the Universitat Pompeu Fabra, in Barcelona, Spain, as well as at the DCC of the University of Chile. Until 2004 he was Professor and founding director of the Center for Web Research at the Dept. of Computing Science of the University of Chile. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989. He is co-author of the best-selling Modern Information Retrieval textbook published by Addison-Wesley in 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the board of governors of the IEEE Computer Society, and in 2012 he was elected to the ACM Council. Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions.
Friday, 29 July 2016 (10:30 - 11:30 am), Ballroom A+B+C
Today quantitative social science is dominated by analytic methods that are heavily slanted toward “variables.” The key focus of analysis is the assessment of the relative importance of "independent" variables on a dependent variable, and researchers view their central task as estimating "net effects." Many scholars find the dominance of variable-oriented approaches deplorable and argue that the proper remedy is to drop the variable altogether from the lexicon of social research. I argue, however, that the notion of the variable should be reformulated in ways that enhance the interplay and integration of cross-case and within-case analysis. Central to this reformulation are set-theoretic methods such as truth table analysis. I show that set-theoretic methods not only provide a better way for researchers to study "connections" between aspects of cases; they also offer a better bridge to conceptual discourse. I argue further that the extensions and elaborations of set-theoretic methods that are afforded by the use of fuzzy sets are especially valuable for social research.
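As a small illustration of such a set-theoretic method, Ragin's consistency score measures the degree to which a fuzzy condition set X is a subset of an outcome set Y. A minimal sketch with invented membership scores:

```python
def consistency(x, y):
    """Consistency of the subset relation "X is a subset of Y" for
    fuzzy sets given as per-case membership scores in [0, 1]:
    sum(min(x_i, y_i)) / sum(x_i).  Equals 1.0 for a perfect subset."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

# Per-case membership in condition X and outcome Y (illustrative data)
X = [0.9, 0.6, 0.3, 1.0]
Y = [1.0, 0.7, 0.2, 0.9]
print(round(consistency(X, Y), 3))  # close to 1: X is nearly a subset of Y
```

A high consistency supports the cross-case claim "the condition is (almost always) sufficient for the outcome", which is the kind of connection between aspects of cases the talk emphasizes.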
Charles Ragin is Chancellor’s Professor of Sociology at the University of California, Irvine. His main research interests are methodology, political sociology, and comparative-historical research. His books include Handbook of Case-Based Methods (Sage, with David Byrne), Configurational Comparative Methods: Qualitative Comparative Analysis and Related Techniques (Sage, with Benoit Rihoux), Redesigning Social Inquiry: Fuzzy Sets and Beyond (University of Chicago Press), Fuzzy-Set Social Science (University of Chicago Press), The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies (University of California Press), Issues and Alternatives in Comparative Social Research (E.J. Brill), What is a Case? Exploring the Foundations of Social Research (Cambridge University Press, with Howard S. Becker), and Constructing Social Research: the Unity and Diversity of Method (Pine Forge Press; second edition with Lisa Amoroso). He is the author of more than 150 articles in research journals and edited books, and he has developed software packages for set-theoretic analysis of social data, Qualitative Comparative Analysis (QCA) and Fuzzy-Set/Qualitative Comparative Analysis (fsQCA). He has been awarded the Stein Rokkan Prize by the International Social Science Council, the Donald Campbell Award for Methodological Innovation by the Policy Studies Organization, and the Paul Lazarsfeld Award of the American Sociological Association (Methodology Section). He has conducted academic workshops on comparative methodology and set-theoretic methods in Austria, Belgium, Canada, Denmark, France, Germany, Italy, Japan, the Netherlands, Norway, Switzerland, South Korea, Taiwan, the United Kingdom, and for diverse audiences in the United States.
Monday, 25 July 2016 (1:00 - 2:00 pm), Ballroom A
In recent years, our deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. They are now widely used in industry. I will briefly review deep supervised / unsupervised / reinforcement learning, and discuss the latest state of the art results in numerous applications.
Since age 15 or so, the main scientific ambition of professor Jürgen Schmidhuber (pronounce: You_again Shmidhoobuh) has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. He has pioneered self-improving general problem solvers since 1987, and Deep Learning Neural Networks (NNs) since 1991. The recurrent NNs (RNNs) developed by his research groups at the Swiss AI Lab IDSIA & USI & SUPSI & TU Munich were the first RNNs to win official international contests. They have revolutionized connected handwriting recognition, speech recognition, machine translation, image caption generation, and are now in use at Google, Microsoft, IBM, Baidu, and many other companies. Two of the first four members of DeepMind (sold to Google for over 600M) were PhD students in his lab. His team's Deep Learners were also the first to win object detection and image segmentation contests, and achieved the world's first superhuman visual classification results, winning nine international competitions in machine learning & pattern recognition (more than any other team). They also were the first to learn control policies directly from high-dimensional sensory input using reinforcement learning.
His research group also established the field of mathematically rigorous universal AI and optimal universal problem solvers. His formal theory of creativity & curiosity & fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. Since 2009 he has been a member of the European Academy of Sciences and Arts. He has published 333 peer-reviewed papers, earned seven best paper/best video awards, the 2013 Helmholtz Award of the International Neural Networks Society, and the 2016 IEEE Neural Networks Pioneer Award. He is also president of NNAISENSE, which aims at building the first practical general purpose AI.
Tuesday, 26 July 2016 (1:00 - 2:00 pm), Ballroom A
Mapping spatio-temporal distribution of brain activation with high spatial resolution and high temporal resolution is of great importance for understanding the brain and aiding in the clinical diagnosis and management of brain disorders. Electrophysiological source imaging (ESI) from noninvasively recorded high density electroencephalogram (EEG) has played a significant role in advancing our ability to image brain function and dysfunction. Development in the past decades has led to the capability of localizing and imaging neural activation associated with cognition, sensory, and motor tasks in patients suffering from various neurological and mental disorders and healthy human subjects. We will discuss principles and current state of EEG-based ESI in localizing and imaging human brain activity with applications to cognitive neuroimaging and seizure localization. We will also discuss the merits and challenges of multimodal functional neuroimaging by integrating electrophysiological and hemodynamic measurements. Finally, we will discuss the co-localization of hemodynamic and electrophysiological signals, and discuss our recent progress in brain-computer interface, demonstrating that humans can control a quadcopter by “mind” from noninvasive EEG signals.
Bin He is Distinguished McKnight University Professor of Biomedical Engineering, Medtronic-Bakken Endowed Chair for Engineering in Medicine, director of the Institute for Engineering in Medicine, and director of the Center for Neuroengineering at the University of Minnesota. Dr. He has made significant research contributions to the fields of neuroengineering and functional biomedical imaging related to human neuroscience, including electrophysiological source imaging, multimodal neuroimaging, and noninvasive brain-computer interfaces. His lab demonstrated the capability of high-resolution imaging of spontaneous seizures from noninvasive EEG and of noninvasively controlling a quadcopter flying in the sky by “thoughts” alone. Dr. He has been recognized with the Academic Career Achievement Award from the IEEE Engineering in Medicine and Biology Society and the Established Investigator Award from the American Heart Association, and has been elected a Fellow of the International Academy of Medical and Biological Engineering, the IEEE, and the American Institute for Medical and Biological Engineering. Dr. He is a Past President of the IEEE Engineering in Medicine and Biology Society and is Chair-Elect of the International Academy of Medical and Biological Engineering. He serves as the Editor-in-Chief of the IEEE Transactions on Biomedical Engineering.
Wednesday, 27 July 2016 (1:00 - 2:00 pm), Ballroom A
The emergence of interconnected cyber-physical systems and sensor/actuator networks has given rise to advanced automation applications, where a large amount of sensor data is collected and processed in order to make suitable real-time decisions and to achieve the desired control objectives. However, in situations where some components behave abnormally or become faulty, the overall automation process exhibits fragility, which may lead to serious degradation in performance or even to catastrophic system failures, especially due to cascaded effects of the interconnected subsystems. The goal of this presentation is to motivate the need for health monitoring, fault diagnosis and security of interconnected cyber-physical systems and to provide the first steps towards a unified theory for designing and analyzing fault-tolerant cyber-physical systems with complex nonlinear dynamics. Various detection, isolation and accommodation algorithms will be presented and illustrated, and directions for future research will be discussed.
Marios M. Polycarpou is a Professor of Electrical and Computer Engineering and the Director of the KIOS Research Center for Intelligent Systems and Networks at the University of Cyprus. He received bachelor’s degrees in Computer Science and in Electrical Engineering from Rice University, Houston, TX, in 1987, and his Ph.D. degree in Electrical Engineering from the University of Southern California, Los Angeles, CA, in 1992. His teaching and research interests are in intelligent systems and networks, computational intelligence, fault diagnosis and distributed agents, and adaptive and cooperative control systems. He has published more than 300 articles in refereed journals, edited books and refereed conference proceedings, and co-authored 7 books. He is also the holder of 6 patents. Prof. Polycarpou is a Fellow of the IEEE and served as the President of the IEEE Computational Intelligence Society from Jan. 2012 to Dec. 2013. He served as the Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems from 2004 to 2010. He is the recipient of the IEEE Neural Networks Pioneer Award for 2016. Prof. Polycarpou has participated in more than 60 research projects/grants, funded by several agencies and industry in Europe and the United States, including the prestigious European Research Council (ERC) Advanced Grant.
Thursday, 28 July 2016 (1:00 - 2:00 pm), Ballroom A
The past three decades have witnessed the birth and growth of neurodynamic optimization, which has emerged and matured as a powerful approach to real-time optimization due to its inherent parallel and distributed information processing and its hardware realizability. Despite this success, almost all existing neurodynamic approaches work well only for convex and generalized-convex optimization problems with unimodal objective functions. Effective neurodynamic approaches to constrained global optimization with multimodal objective functions are rarely available. In this talk, starting with the idea and motivation of neurodynamic optimization, I will review its history and present the state of the art, covering many individual models for convex and generalized-convex optimization. In addition, I will present a multiple-time-scale neurodynamic approach to selected constrained optimization problems. Finally, I will introduce population-based collective neurodynamic approaches to constrained distributed and global optimization. By deploying a population of individual neurodynamic models with diversified initial states at a lower level, coordinated by global search and information exchange rules (such as PSO and DE) at an upper level, it will be shown that many constrained global optimization problems can be solved effectively and efficiently.
Jun Wang is the Chair Professor of Computational Intelligence in the Department of Computer Science at City University of Hong Kong. Prior to this position, he held various academic positions at Dalian University of Technology, Case Western Reserve University, the University of North Dakota, and the Chinese University of Hong Kong. He also held various short-term visiting positions at the USAF Armstrong Laboratory, RIKEN Brain Science Institute, Dalian University of Technology, Huazhong University of Science and Technology, and Shanghai Jiao Tong University (Changjiang Chair Professor). He received a B.S. degree in electrical engineering and an M.S. degree in systems engineering from Dalian University of Technology and his Ph.D. degree in systems engineering from Case Western Reserve University. His current research interests include neural networks and their applications. He has published about 200 journal papers, 15 book chapters, 11 edited books, and numerous conference papers in these areas. He is the Editor-in-Chief of the IEEE Transactions on Cybernetics. He also served as an Associate Editor of the IEEE Transactions on Neural Networks (1999-2009), the IEEE Transactions on Cybernetics and its predecessor (2003-2013), and the IEEE Transactions on Systems, Man, and Cybernetics – Part C (2002-2005), and as a member of the editorial board of Neural Networks (2012-2014) and the editorial advisory board of the International Journal of Neural Systems (2006-2013). He was an organizer of several international conferences, serving as the General Chair of the 13th International Conference on Neural Information Processing (2006) and the 2008 IEEE World Congress on Computational Intelligence, and as a Program Chair of the IEEE International Conference on Systems, Man, and Cybernetics (2012). He has been an IEEE Computational Intelligence Society Distinguished Lecturer (2010-2012, 2014-2016).
In addition, he served as President of the Asia Pacific Neural Network Assembly (APNNA) in 2006 and has served on many committees and boards, such as the IEEE Fellow Committee; the IEEE Computational Intelligence Society Awards Committee; and the IEEE Systems, Man, and Cybernetics Society Board of Governors. He is an IEEE Fellow and an IAPR Fellow, and a recipient of an IEEE Transactions on Neural Networks Outstanding Paper Award, the APNNA Outstanding Achievement Award in 2011, and the Neural Networks Pioneer Award from the IEEE Computational Intelligence Society (2014), among others.
Friday, 29 July 2016 (1:00 - 2:00 pm), Ballroom A
With the recent development of brain research and modern technologies, scientists and engineers will hopefully find efficient ways to develop brain-like intelligent systems that are highly robust, adaptive, scalable, and fault-tolerant in uncertain and unstructured environments. Yet developing such truly intelligent systems requires significant research on both the fundamental understanding of brain intelligence and complex engineering design. This talk presents recent research developments in computational intelligence that advance machine intelligence research, and explores their wide applications in complex cyber-physical systems across different domains.
Specifically, this talk will focus on a new adaptive dynamic programming (ADP) framework with rich internal goal representation for improved learning and optimization capability over time. This architecture integrates an internal goal generator network to provide a more informative and detailed internal value representation to support the decision-making process. Compared to the existing ADP approaches with a manual or “hand-crafted” reinforcement signal design, our approach can automatically and adaptively develop the internal reinforcement signal over time, thereby improving the learning and control performance. This internal goal network can also be designed in a hierarchical way, to provide a multi-level value representation. Under this ADP framework, I will present several applications, including smart grid and human-robot interaction, to demonstrate the framework's broad and far-reaching applicability.
Haibo He is the Robert Haas Endowed Chair Professor and the Director of the Computational Intelligence and Self-Adaptive (CISA) Laboratory at the University of Rhode Island, Kingston, RI, USA. His primary research interests include computational intelligence and its applications to complex systems. He has published one sole-author book (Wiley), edited 1 book (Wiley-IEEE) and 6 conference proceedings (Springer), and authored/co-authored over 180 peer-reviewed journal and conference papers, including several highly cited papers in IEEE Transactions on Neural Networks and IEEE Transactions on Knowledge and Data Engineering, a Cover Page Highlighted paper in IEEE Transactions on Information Forensics and Security, and Best Readings of the IEEE Communications Society. He has delivered more than 40 invited talks around the globe. His research has been covered by numerous national and international media outlets, such as the IEEE Smart Grid Newsletter, The Wall Street Journal, China Central Television (CCTV), Providence Journal, and Providence Business News, among others.
He has served the IEEE Computational Intelligence Society (CIS) in various capacities, including Chair of the IEEE CIS Emergent Technologies Technical Committee (ETTC) (2015), Chair of the IEEE CIS Neural Networks Technical Committee (NNTC) (2013 and 2014), Vice Chair of the IEEE CIS Adaptive Dynamic Programming and Reinforcement Learning (ADPRL) Technical Committee (2012 and 2013), Editor of the IEEE CIS Electronic Letter (E-letter) (2009 and 2010), and Manager of the IEEE CIS Website (2011 and 2012), among others. He was the General Chair of the 2014 IEEE Symposium Series on Computational Intelligence (IEEE SSCI’14), Technical Program Co-Chair of the 2015 International Joint Conference on Neural Networks (IJCNN’15), and Program Co-Chair of the 2014 International Joint Conference on Neural Networks (IJCNN’14), among others. He has served as an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems (2010 to 2015) and the IEEE Computational Intelligence Magazine (2015), and serves as an Associate Editor of the IEEE Transactions on Smart Grid (2010 to present). He is the Editor-in-Chief Elect of the IEEE Transactions on Neural Networks and Learning Systems (2015).
He was a recipient of the IEEE International Conference on Communications (ICC) “Best Paper Award” (2014), IEEE CIS “Outstanding Early Career Award” (2014), National Science Foundation “Faculty Early Career Development (CAREER) Award” (2011), Providence Business News (PBN) “Rising Star Innovator” Award (2011), and “Best Master Thesis Award” of Hubei Province, China (2002).
Monday, 25 July 2016 (1:00 - 2:00 pm), Ballroom B
Imprecise database models, including similarity-based fuzzy models and rough set models, are reviewed. Various entropy measures for these database models' content and responses to querying are then described. Aggregation of uncertainty representations is also considered, with an overview of information-theory metrics and the ranges of their values for extreme probability cases. Two approaches to probability-possibility aggregation are considered, including the possibilistic conditioning of probability and possibility transformations. Information measures are used to compare the resultant probability to the original probability for three cases of possibility distributions.
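For reference, one classical probability-to-possibility transformation assigns each outcome the total probability mass of outcomes no more probable than it. A minimal sketch under that assumption (the distribution is illustrative, and this is only one standard transformation, not necessarily the exact ones compared in the talk):

```python
def prob_to_poss(p):
    """Transform a probability distribution into a possibility
    distribution: with probabilities ranked p1 >= ... >= pn, outcome i
    receives pi_i = sum of p_j for j >= i, so the most probable outcome
    gets possibility 1.  (A tie-handling convention is omitted here.)"""
    order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)
    poss = [0.0] * len(p)
    tail = sum(p)                  # starts at 1 for a proper distribution
    for i in order:
        poss[i] = tail             # remaining probability mass
        tail -= p[i]
    return poss

print(prob_to_poss([0.5, 0.3, 0.2]))   # roughly [1.0, 0.5, 0.2]
```

Comparing the original probabilities with a probability recovered from such a possibility distribution is the kind of round-trip that the information measures in the talk quantify.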
Frederick E. Petry received BS and MS degrees in physics and a PhD in computer and information science from The Ohio State University in 1975. He is currently a computer scientist with the Naval Research Laboratory at the NASA Stennis Space Center. He has been on the computer science faculty of the University of Alabama in Huntsville, the Ohio State University, Tulane University and is an Emeritus Professor at Tulane. His recent research interests include representation of imprecision via fuzzy sets and rough sets in databases, GIS and other information systems, semantic web systems and artificial intelligence including genetic algorithms. Dr. Petry has over 350 scientific publications including 150 journal articles/book chapters and 9 books written or edited. His monograph on fuzzy databases has been widely recognized as a definitive volume on this topic. He has been a general or honorary chairperson of several international conferences and is an IEEE Life Fellow, IFSA Fellow and an ACM Distinguished Scientist. In 2002 he was chosen as the outstanding researcher of the year in the Tulane University School of Engineering and received the Naval Research Laboratory’s Berman Research Publication awards in 2004, 2008 and 2010. Dr. Petry was also selected for the 2016 IEEE CIS Fuzzy Systems Pioneer Award.
Tuesday, 26 July 2016 (1:00 - 2:00 pm), Ballroom B
This talk begins with a short review of clustering that emphasizes external cluster validity indices (CVIs). A method for generalizing external pair-based CVIs (e.g., the crisp Rand and Jaccard indices) to evaluate soft partitions is described and illustrated. Two types of validation experiments conducted with synthetic and real-world labeled data are discussed: "best c" (internal validation with labeled data) and "best I/E" (agreement between an internal and external CVI pair).
As is always the case in cluster validity, conclusions based on empirical evidence are at the mercy of the data, so the reported results might be invalid for different data sets and/or clustering models and algorithms. But much more importantly, we discovered during these tests that some external cluster validity indices are also at the mercy of the distribution of the ground truth itself. We believe that our study of this surprising fact is the first systematic analysis of a largely unknown but very important problem: bias due to the distribution of the ground truth partition.
Specifically, in addition to the well-known bias in many external CVIs caused by monotonic dependency on c, the number of clusters in candidate partitions, there are two additional kinds of bias that can be caused by an unusual distribution of the clusters in the ground truth partition provided with labeled data. The most important ground truth bias is caused by imbalance (unequally sized labeled subsets). We demonstrate these effects with randomized experiments on 25 pair-based external CVIs. Then we provide a theoretical analysis of bias due to ground truth for several CVIs by relating Rand's index to the Havrda-Charvat quadratic entropy.
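To make the pair-based idea concrete, here is a minimal sketch (my own, not the speaker's code) of the crisp Rand index: it counts the object pairs on which a candidate partition agrees with the ground truth labels, which is exactly the quantity whose distribution-dependence the talk examines:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Crisp Rand index: the fraction of object pairs on which two
    partitions agree (grouped together in both, or separated in both)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

truth     = [0, 0, 0, 1, 1, 1]   # ground truth partition of 6 objects
candidate = [0, 0, 1, 1, 1, 1]   # a candidate partition to validate
print(rand_index(truth, candidate))   # agreement on 10 of the 15 pairs
```

Because the index is built from pair counts, skewing the sizes of the ground truth clusters changes the mix of "together" and "apart" pairs, which is one route by which the imbalance bias discussed above can enter.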
Jim received the PhD in Applied Mathematics from Cornell University in 1973. Jim is past president of NAFIPS (North American Fuzzy Information Processing Society), IFSA (International Fuzzy Systems Association) and the IEEE CIS (Computational Intelligence Society, formerly the NNC); founding editor of the Int'l Journal of Approximate Reasoning and the IEEE Transactions on Fuzzy Systems; Life Fellow of the IEEE and IFSA; and a recipient of the IEEE 3rd Millennium medal, the CIS Fuzzy Systems Pioneer award, the IEEE Rosenblatt technical field award, and the IPMU Kempe de Feret Award. Jim retired in 2007, and will be coming to a university near you soon.
Jim's interests: woodworking, optimization, motorcycles, pattern recognition, cigars, clustering in big data, fishing, co-clustering, blues music, wireless sensor networks, gardening, cluster validity, poker and visual clustering.
Wednesday, 27 July 2016 (1:00 - 2:00 pm), Ballroom B
“Nature is pleased with simplicity. And nature is no dummy” (Isaac Newton); “... I ask many questions, and when the answer is simple, then God is answering” (Albert Einstein). These quotes by the two greatest scientists of all time suggest a very important point: “truth” is simple, the principle of parsimony. Taking inspiration from this, in science and technology, when we make a model to explain some natural phenomenon we should always look for a simple model that works. In other words, “Make everything as simple as possible, but not simpler” (Albert Einstein).
Sparse modelling is a particular manifestation of this principle of parsimony; it is one of the ways to realize “simple models”. The literature on sparse modelling in statistical learning is quite rich, but in fuzzy modelling, although it is sometimes used, explicitly or implicitly, as a sound design principle, it is not a very active area of research. In this talk, I shall first discuss very briefly a few illustrative approaches to classical sparse modelling in statistical learning and then talk about some of the attempts in fuzzy modelling. In this context, I shall consider three types of problems: classification, clustering and regression. Finally, I shall conclude with some results of our own investigation in sparse fuzzy modelling.
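One classical route to sparse models in statistical learning is L1 regularization (the lasso). The sketch below, which is my own illustration and not material from the talk, solves a lasso problem by iterative soft-thresholding (ISTA) on synthetic data where only two of ten features truly matter; the L1 penalty zeroes out the rest, yielding a "simple model that works":

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: shrinks values toward zero, zeroing small ones."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Iterative soft-thresholding for min_w 0.5*||Xw - y||^2 + lam*||w||_1."""
    lr = 1.0 / np.linalg.norm(X, ord=2) ** 2   # step size from the Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # gradient step on the squared loss, then the L1 proximal (shrinkage) step
        w = soft_threshold(w - lr * (X.T @ (X @ w - y)), lr * lam)
    return w

# Synthetic data: 10 candidate features, only 2 of them truly active
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_w = np.zeros(10)
true_w[0], true_w[3] = 2.0, -1.5
y = X @ true_w + 0.01 * rng.normal(size=100)

w = lasso_ista(X, y, lam=5.0)
print(np.nonzero(np.abs(w) > 1e-6)[0])   # indices of the surviving features
```

The eight irrelevant weights are driven exactly to zero rather than merely made small, which is the defining property that sparse fuzzy modelling borrows: entire rules or features can be pruned from the model.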
Nikhil R. Pal is a Professor in the Electronics and Communication Sciences Unit of the Indian Statistical Institute. His current research interest includes bioinformatics, brain science, fuzzy logic, pattern analysis, neural networks, and evolutionary computation.
He was the Editor-in-Chief of the IEEE Transactions on Fuzzy Systems for the period January 2005-December 2010. He has served/been serving on the editorial/advisory boards/steering committees of several journals, including the International Journal of Approximate Reasoning, Applied Soft Computing, Neural Information Processing-Letters and Reviews, International Journal of Knowledge-Based Intelligent Engineering Systems, International Journal of Neural Systems, Fuzzy Sets and Systems, International Journal of Intelligent Computing in Medical Sciences and Image Processing, Fuzzy Information and Engineering: An International Journal, IEEE Transactions on Fuzzy Systems and the IEEE Transactions on Systems, Man and Cybernetics-B.
He has given many plenary/keynote speeches in different premier international conferences in the area of computational intelligence. He has served as the General Chair, Program Chair, and co-Program chair of several conferences.
He was a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS) and was a member of the Administrative Committee of the IEEE CIS. At present he is the Vice President for Publications of the IEEE CIS.
He is a Fellow of the National Academy of Sciences, India, a Fellow of the Indian National Academy of Engineering, a Fellow of the Indian National Science Academy, a Fellow of the International Fuzzy Systems Association (IFSA), and a Fellow of the IEEE, USA.
Thursday, 28 July 2016 (1:00 - 2:00 pm), Ballroom B
This presentation highlights the value of fuzzy transfer learning methods and related algorithms for handling complex prediction problems in rapidly changing data distributions and data-shortage situations. It provides a framework for utilizing previously acquired knowledge to predict new but similar problems quickly and effectively by using fuzzy set techniques. It systematically presents developments in fuzzy set-based transfer learning methods for prediction, including a fuzzy transfer learning-based prediction framework, fuzzy domain adaptation, fuzzy cross-domain adaptation and, in particular, a cross-domain adaptive neuro-fuzzy inference system, together with their respective applications. The presentation demonstrates the successful use of fuzzy techniques in facilitating the incorporation of approximation and expressiveness of data uncertainties within knowledge transfer, machine learning and data-driven decision support systems.
Professor Jie Lu is the Associate Dean (Research) in the Faculty of Engineering and Information Technology at the University of Technology Sydney (UTS). She is also the Director of the Decision Systems and e-Service Intelligence (DeSI) Research Laboratory in the Centre for Quantum Computation & Intelligent Systems. Her main research interests lie in the areas of decision support systems, recommender systems, prediction and early warning systems, fuzzy transfer learning, concept drift and web-based e-service intelligence. She has published six research books and 400 papers in refereed journals and conference proceedings. She has won seven Australian Research Council (ARC) Discovery grants and 10 other research grants in the last 15 years. She received the first UTS Research Excellence Medal for Teaching and Research Integration in 2010, among other awards. She serves as Editor-In-Chief of Knowledge-Based Systems (Elsevier) and of the International Journal of Computational Intelligence Systems (Atlantis), and has delivered many keynote speeches at international conferences.
Friday, 29 July 2016 (1:00 - 2:00 pm), Ballroom B
Undoubtedly, fuzzy models and fuzzy modeling have come a long way since the inception of fuzzy sets a half-century ago, encompassing today a plethora of concepts, design methodologies, algorithmic pursuits, and applications. There have been various views of fuzzy models, their roles and interpretations. With this diversity in mind, we present a retrospective overview of the area of fuzzy modeling and identify new directions. We develop a systematic picture showing how fuzzy modeling has evolved over the decades, identify the main trends present there, and highlight the main challenges being tackled.
One of the currently most visible directions involves fuzzy models and fuzzy modeling realized with the aid of type-2 or interval-valued fuzzy sets. Following this line of thought, we revisit the existing practices of fuzzy modeling and cast them in the framework of Granular Computing, which gives rise to granular modeling. We introduce the concepts of granular spaces, viz. spaces of granular parameters of the models and granular input spaces, which play a central role in granular models. A number of compelling arguments of a conceptual and applied nature are brought forward. The design of granular spaces is elaborated on, and a number of representative architectures, in particular granular fuzzy rule-based models, are included. A discussion of the generalization of such spaces, composed of higher-type information granules, is also covered.
Witold Pedrycz is a Professor and Canada Research Chair (CRC) in Computational Intelligence in the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He is also with the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland. He also holds an appointment of special professorship in the School of Computer Science, University of Nottingham, UK. In 2009, Dr. Pedrycz was elected a foreign member of the Polish Academy of Sciences. In 2012 he was elected a Fellow of the Royal Society of Canada. Witold Pedrycz has been a member of numerous program committees of IEEE conferences in the area of fuzzy sets and neurocomputing. In 2007 he received the prestigious Norbert Wiener award from the IEEE Systems, Man, and Cybernetics Council. He is a recipient of the 2008 IEEE Canada Computer Engineering Medal. In 2009 he received the Cajastur Prize for Soft Computing from the European Centre for Soft Computing for “pioneering and multifaceted contributions to Granular Computing”. In 2013 he was awarded a Killam Prize. In the same year he received the 2013 Fuzzy Pioneer Award from the IEEE Computational Intelligence Society.
His main research directions involve Computational Intelligence, fuzzy modeling and Granular Computing, knowledge discovery and data mining, fuzzy control, pattern recognition, knowledge-based neural networks, relational computing, and Software Engineering. He has published numerous papers in this area. He is also an author of 15 research monographs covering various aspects of Computational Intelligence, data mining, and Software Engineering.
Dr. Pedrycz is intensively involved in editorial activities. He is Editor-in-Chief of Information Sciences, Editor-in-Chief of WIREs Data Mining and Knowledge Discovery (Wiley) and Co-Editor-in-Chief of Granular Computing (Springer). He also currently serves as an Associate Editor of the IEEE Transactions on Fuzzy Systems and is a member of the editorial boards of a number of international journals in the area of Computational Intelligence and intelligent systems.
Monday, 25 July 2016 (1:00 - 2:00 pm), Ballroom C
Many-objective optimization problems (ManyOPs) pose challenges to existing multi-objective evolutionary algorithms (MOEAs) in terms of convergence, diversity, and computational complexity. This talk presents a personal view of various strategies and methods for coping with many objectives, from more efficient non-dominated sorting and nonlinear dimensionality reduction to a two-archive algorithm (Two_Arch2), which uses two separate archives to focus on convergence and diversity, respectively. Different selection principles (indicator-based and Pareto-based) are used in the two archives. A new Lp-norm-based diversity maintenance scheme is introduced. Our experimental results show that Two_Arch2 can cope with ManyOPs (up to 20 objectives) with satisfactory convergence, diversity, and complexity.
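For readers unfamiliar with the building blocks above, here is a minimal sketch (my own illustration, not the Two_Arch2 code) of Pareto dominance and the extraction of a non-dominated front, the operation whose efficiency the talk revisits:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Extract the first non-dominated front by pairwise comparison (O(n^2))."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Five candidate solutions with two objectives to minimize
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(nondominated_front(pts))   # (3, 4) and (5, 5) are dominated
```

With only two objectives most points are comparable, but as the number of objectives grows almost every pair becomes mutually non-dominated, which is precisely why many-objective problems defeat plain Pareto-based selection and motivate the two-archive design.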
Xin Yao is a Professor of Computer Science at the University of Birmingham, UK. His main research interests include evolutionary computation and ensemble learning. He is an IEEE Fellow and a Distinguished Lecturer of IEEE Computational Intelligence Society. He has been interested in many objective optimisation since his EMO'03 paper, jointly with Khare and Deb. His students and he developed the two archive algorithm in 2006 and its improved version in 2015. He has also worked with his students on non-dominated sorting, harmonic distance-based MOEAs, nonlinear dimension reduction, alternative dominance relationships, and various applications of multi-objective optimisation and multi-objective learning, especially in software engineering.
Tuesday, 26 July 2016 (1:00 - 2:00 pm), Ballroom C
Evolutionary machine learning occupies a relatively new niche for evolutionary computation providing enabling technology for extracting insights from data. Its progress will be accelerated by open-source, cloud-scaled platforms. FlexGP and FCUBE are example modeling and classification platforms that enable high dimensional, high volume machine learning with genetic programming and any contributed evolutionary learner, respectively. Genetic programming also lends itself well to feature synthesis.
Una-May O'Reilly received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society for Genetic and Evolutionary Computation, now ACM SIGEVO. She now serves as Vice-Chair of ACM SIGEVO. She served as chair of GECCO, the largest international Evolutionary Computation conference, in 2005. She has served on the GECCO business committee, co-led the 2006 and 2009 Genetic Programming: Theory to Practice Workshops, and co-chaired EuroGP, the largest conference devoted to Genetic Programming. In 2013 she inaugurated the Women in Evolutionary Computation group at GECCO. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer), an editor for Evolutionary Computation (MIT Press), and an action editor for the Journal of Machine Learning Research.
Wednesday, 27 July 2016 (1:00 - 2:00 pm), Ballroom C
In this talk, we discuss the main challenges in evolutionary optimization of complex real-world problems, including the complexities in formulating the optimization problems, high computational cost, large scale in both the decision and objective spaces, and uncertainties. We then provide some example solutions that have been developed over the past decade, illustrated by a variety of real-world optimization problems such as the design of micro heat exchangers, turbine blades, and high-lift wing systems. Finally, we present some ideas in data-driven evolutionary optimization with an example of the design of trauma systems.
Yaochu Jin received the B.Sc., M.Sc., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1988, 1991, and 1996, respectively, and the Dr.-Ing. degree from Ruhr-University Bochum, Bochum, Germany, in 2001.
He is currently Professor in Computational Intelligence with the Department of Computer Science, University of Surrey, Guildford, UK, where he is Head of the Nature Inspired Computing and Engineering (NICE) Group. He is presently also a Finland Distinguished Professor, University of Jyvaskyla, Finland, and Changjiang Distinguished Visiting Professor, Northeastern University, China.
Dr Jin is the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems and the Editor-in-Chief of Complex & Intelligent Systems (Springer). He is also an Associate Editor of IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, IEEE Transactions on Nanobioscience, and an Editorial Board Member of Evolutionary Computation (MIT).
He was Vice President for Technical Activities of the IEEE Computational Intelligence Society (2014-2015) and an IEEE Distinguished Lecturer (2013-2015). He is a Fellow of IEEE.
Thursday, 28 July 2016 (1:00 - 2:00 pm), Ballroom C
Evolutionary Computation (EC) has been part of the research agenda for at least 60 years, but if you ask the average EC researcher to name three examples of EC being used in real-world applications, they might struggle. Other technologies have seen much wider adoption. 3D printing is changing the way that manufacturing is done, moving some of that functionality into the home. Immersive reality is on the verge of changing society, in ways that are not totally clear yet. Ubiquitous computing is becoming more prevalent, enabling users to access computing resources in ways that were unimaginable even just a few years ago. It might be argued that EC has not had the same penetration as these other technologies. In this talk, we will look back at what EC promised, see if it has delivered on that promise, and compare its progress with other technologies. Finally, we will suggest some challenges that might further advance EC and enable its wider adoption.
Professor Graham Kendall is the Vice-Provost (Research and Knowledge Transfer) at the University of Nottingham Malaysia Campus and a Professor of Computer Science at the University of Nottingham. He is a member of the Automated Scheduling, Optimisation and Planning (ASAP) Research Group.
Graham is a distinguished Professor at the Open University Hong Kong and an honorary Professor at Amity University, India.
He received his BSc (Hons) in Computation from UMIST, UK in 1997 and his PhD from the University of Nottingham in 2000. He was made a full Professor in 2007 and moved to Malaysia in 2011, where he plans to be until at least 2019, when he will return to the UK.
Graham is the Editor-in-Chief of the IEEE Transactions on Computational Intelligence and AI in Games and an Associate Editor of ten other journals, including the IEEE Transactions on Evolutionary Computation.
His research interests lie at the intersection of Evolutionary Computation and Operations Research; he has published over 220 peer-reviewed papers across these two domains.
Friday, 29 July 2016 (1:00 - 2:00 pm), Ballroom C
Swarm intelligence is one of the popular approaches in computational intelligence. It models the collective behavior of simple individuals with simple rules and local interactions. In nature, swarm-intelligent systems are known to be flexible, robust against failures, and adaptive to changes in the environment. This talk presents the theory of swarm intelligence and its application in technical systems. The major focus is on methodologies for social and cooperative behavior of technical systems in unknown environments. In particular, multi-objective decision-making at runtime in dynamic environments will be addressed. The application and corresponding challenges in swarm robotics using flying objects and real-time simulations in dynamic environments will be presented.
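To illustrate the "simple rules, local interactions" idea, here is a minimal particle swarm optimizer sketch (my own, not from the talk; the parameter values are conventional choices, not the speaker's): each particle blends inertia, attraction to its own best find, and attraction to the swarm's best find, and useful global behavior emerges.

```python
import random

def pso(f, dim=2, n_particles=20, n_iter=200, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer (minimization). Each particle follows
    three simple rules: keep moving (inertia), return toward its own best
    position (cognitive), and move toward the swarm's best position (social)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]        # each particle's best position so far
    gbest = min(pbest, key=f)          # the swarm's best position so far
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                         # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda x: sum(v * v for v in x)   # a simple test landscape
best = pso(sphere)
print(best, sphere(best))                  # converges near the optimum at the origin
```

No individual knows the global optimum or coordinates the search; the flexibility and robustness the abstract mentions come from the fact that any particle can fail or restart without breaking the collective behavior.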
Sanaz Mostaghim is a professor of computer science at the Otto von Guericke University Magdeburg, Germany. She holds a PhD degree (2004) in electrical engineering from the University of Paderborn, Germany. Sanaz has worked as a postdoctoral fellow at ETH Zurich in Switzerland (2004-2006) and as a lecturer at the Karlsruhe Institute of Technology (KIT), Germany (2006-2013), where she received her habilitation degree in applied computer science in 2012. Her research interests are in the areas of swarm intelligence, evolutionary multi-objective optimization, and their applications in robotics, science and industry. Sanaz is an active member of the IEEE Computational Intelligence Society (CIS) and currently serves as a member of its Administrative Committee (AdCom). She is an associate editor of the IEEE Transactions on Evolutionary Computation, the IEEE Transactions on Cybernetics, and the Swarm Intelligence and Evolutionary Computation journals, and a member of the editorial board of the Springer journal Complex & Intelligent Systems. Sanaz is the chair of Women in Computational Intelligence (WCI) and the IEEE CIS task force on evolutionary multi-objective optimization.