

Augmenting Cognition through Data Visualization

Alark Joshi, Associate Professor in the Department of Computer Science at the University of San Francisco

Date & Time: Thursday, May 17, 2018, 6:00 PM – 8:00 PM PDT

Location: Intel SC12, 3600 Juliette Ln, Santa Clara, CA 95054

Registration Fee: IEEE CIS members: free
         Students - $3 (Register at Door $3)
         IEEE (non-CIS) members - $7 donation (Register at Door $10)
         Non-members - $10 (Register at Door $15)

Abstract: As datasets get larger, it has become increasingly important to manage visual attention and augment cognition. My research has focused on developing novel techniques to draw a viewer's attention to regions of interest in large datasets. These techniques are specifically designed to make task completion more efficient and to improve the user experience. In this talk, I will present my approaches to solving real-world problems in the fields of atmospheric physics, neurosurgery, and mobile data visualization. Our techniques provide temporal context to atmospheric physicists for improved hurricane exploration. I will also present augmented visualization techniques for image-guided navigation during neurosurgery that have resulted in increased accuracy and confidence among neurosurgeons as compared to the state of the art. Finally, I will present new work from our lab on foveated visualization for rapid exploration that addresses computational complexity when visualizing large datasets.
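The foveation idea can be sketched in a few lines. The snippet below is a minimal illustration of the general principle, not the speaker's actual technique: sample a large field at full resolution near a focus point and only sparsely in the periphery, so the cost of a view scales with the region of interest rather than the whole dataset. All names and parameter values here are hypothetical.

```python
# Minimal sketch of the general idea behind foveated visualization:
# sample a large 2-D field densely near a focus point and coarsely in
# the periphery, cutting the work needed for a full-resolution view.
import numpy as np

def foveated_sample(field, focus, inner_radius=32, coarse_step=8):
    """Return (row, col) sample coordinates: every cell within
    `inner_radius` of `focus`, every `coarse_step`-th cell outside."""
    rows, cols = field.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    dist = np.hypot(ys - focus[0], xs - focus[1])
    fine = dist <= inner_radius                      # full resolution near the focus
    coarse = (ys % coarse_step == 0) & (xs % coarse_step == 0)
    keep = fine | (coarse & ~fine)                   # dense center, sparse periphery
    return np.argwhere(keep)

field = np.random.rand(1024, 1024)                   # stand-in for a large dataset
samples = foveated_sample(field, focus=(512, 512))
print(f"{len(samples)} of {field.size} cells sampled "
      f"({100 * len(samples) / field.size:.1f}%)")
```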

Biography: Alark Joshi is an Associate Professor in the Department of Computer Science at the University of San Francisco, where he works on data visualization projects for improved neurosurgical planning and treatment. His research focuses on developing and evaluating the ability of novel visualization techniques to communicate information for effective decision making and discovery. His work has led to novel visualization techniques in fields as diverse as computational fluid dynamics, atmospheric physics, medical imaging, and cell biology. He received his postdoctoral training at Yale University and his Ph.D. in Computer Science from the University of Maryland, Baltimore County.


Running Sparse and Low-Precision Neural Networks: An Interactive Play between Software and Hardware

Hai "Helen" Li, Associate Professor in the Department of Electrical and Computer Engineering, Duke University

Date & Time: Thursday, March 8, 2018, 6:30 PM – 8:30 PM PST

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Registration Fee: IEEE CIS members: free
         Students - $3 (Register at Door $3)
         IEEE (non-CIS) members - $7 donation (Register at Door $10)
         Non-members - $10 (Register at Door $15)

Abstract: Following technology advances in high-performance computation systems and the fast growth of data acquisition, machine learning, especially deep learning, has achieved remarkable success in many research areas and applications. Such success, to a great extent, is enabled by developing large-scale deep neural networks (DNNs) that learn from a huge volume of data. The deployment of such big models, however, is both computation-intensive and memory-intensive. Though hardware acceleration for neural networks has been studied extensively, the progress of hardware development still falls far behind the upscaling of DNN models at the software level. We envision that hardware/software co-design for performance acceleration of deep neural networks is necessary. In this talk, I will start with the trends of machine learning research in academia and industry, followed by our study on how to run sparse and low-precision neural networks, demonstrating an interactive play between software and hardware.
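For readers unfamiliar with the two compression ideas in the title, the sketch below illustrates them in their simplest generic form: magnitude pruning, which zeroes small weights to produce a sparse network, and uniform quantization, which maps weights onto a small set of low-precision levels. This is an illustration under our own assumptions, not the speaker's specific hardware/software co-design method.

```python
# Generic sketch of sparsity (magnitude pruning) and low precision
# (uniform quantization) applied to a weight matrix. Illustrative only;
# not the speaker's specific co-design approach.
import numpy as np

def prune_by_magnitude(w, sparsity=0.9):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def quantize_uniform(w, bits=4):
    """Uniformly quantize weights to 2**bits levels over their range."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels
    return np.round((w - lo) / scale) * scale + lo

w = np.random.randn(256, 256).astype(np.float32)     # stand-in weight matrix
w_sparse = prune_by_magnitude(w, sparsity=0.9)       # 90% of weights become zero
w_low = quantize_uniform(w_sparse, bits=4)           # 16 representable levels
print("nonzero fraction:", np.count_nonzero(w_sparse) / w.size)
print("distinct values after 4-bit quantization:", len(np.unique(w_low)))
```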

Biography: Hai “Helen” Li received the B.S. and M.S. degrees from Tsinghua University, Beijing, China, and the Ph.D. degree from the Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA, in 2004. She is currently the Clare Boothe Luce Associate Professor in the Department of Electrical and Computer Engineering at Duke University, Durham, NC, USA. She has previously worked at Qualcomm Inc., San Diego, CA, USA; Intel Corporation, Santa Clara, CA, USA; Seagate Technology, Bloomington, MN, USA; the Polytechnic Institute of New York University, Brooklyn, NY, USA; and the University of Pittsburgh, Pittsburgh, PA, USA. She has authored or co-authored over 200 technical papers published in peer-reviewed journals and conferences and holds 70+ granted U.S. patents.

Professor Li authored the book Nonvolatile Memory Design: Magnetic, Resistive, and Phase Changing (CRC Press, 2011). Her current research interests include memory design and architecture, neuromorphic architectures for brain-inspired computing systems, and architecture/circuit/device cross-layer optimization for low power and high performance. Dr. Li serves as an Associate Editor of TVLSI, TCAD, TODAES, TMSCS, TECS, CEM, and IET Cyber-Physical Systems: Theory & Applications. She has served on the organizing and technical program committees of over 30 international conference series. She received the NSF CAREER Award (2012), the DARPA YFA Award (2013), the TUM-IAS Hans Fischer Fellowship (2017), seven best paper awards, and another seven best paper nominations. Dr. Li is a senior member of IEEE and a distinguished member of ACM.

Deep Learning in Biomedicine and Genomics: An Introduction and Applications to Next-generation Sequencing and Disease Diagnostics

Mark DePristo, Head of Deep Learning for Genetics and Genomics at Google

Date & Time: Monday, December 4, 2017, 6:30 PM – 8:30 PM PST

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Registration Fee: IEEE CIS members: free
         Students - free
         IEEE (non-CIS) members - $5 donation
         Non-members - $5 pay at the door

Unfortunately, slides and a YouTube video are not available at this time for this event.

Abstract:
  • We will review the history and taxonomy of machine learning and artificial intelligence.
  • We will introduce deep learning, covering both what it is and why it's so exciting.
  • We will highlight a few deep learning applications to biomedical problems across the field.
  • We will do a deep dive into three recent deep learning applications from Google Brain (a toy sketch of the third follows this list):
    • Detection of cancer cells in pathology images
    • Detection of diabetic retinopathy from fundus images of the eye
    • Calling SNP and indel variants in next-generation sequencing data
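As background for the third application, the toy sketch below illustrates the key reframing behind deep-learning variant callers such as DeepVariant: the reads overlapping a candidate site are encoded as an image-like tensor, which an ordinary image-classification CNN can then label with genotype probabilities (hom-ref, het, hom-alt). The encoding and channel choices here are simplified stand-ins, not the production format.

```python
# Toy sketch of variant calling as image classification: encode a read
# pileup as a small tensor that a standard image CNN could classify.
# The two channels here are simplified stand-ins for the real encoding.
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pileup_tensor(reads, ref, max_reads=8):
    """Encode reads spanning a window as (max_reads, len(ref), 2):
    channel 0 = base identity, channel 1 = matches-reference flag."""
    t = np.zeros((max_reads, len(ref), 2), dtype=np.float32)
    for i, read in enumerate(reads[:max_reads]):
        for j, base in enumerate(read):
            t[i, j, 0] = (BASES[base] + 1) / 4.0      # base identity, scaled to (0, 1]
            t[i, j, 1] = float(base == ref[j])        # 1 if the read agrees with ref
    return t

ref = "ACGTACGT"
reads = ["ACGTACGT", "ACGAACGT", "ACGAACGT"]          # two reads support an A at pos 3
x = pileup_tensor(reads, ref)
print(x.shape)  # (8, 8, 2) -- a tiny "image" for a genotype classifier
```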

Biography: I'm Mark DePristo, a Google software engineer since 2015. I lead the Google Brain Genomics team, where we work on advancing the capabilities and applications of deep learning technology in TensorFlow for genomics problems. Before joining Google, I was Vice President of Informatics at SynapDx, a Google Ventures-backed startup developing a blood-based test for autism. Before that, I was Co-Director of Medical and Population Genetics at the Broad Institute, where I created and led the ~10-person team that developed the GATK, the dominant software for processing next-generation DNA sequencing data. I have a BA in Computer Science and Math from Northwestern, a PhD in Biochemistry from the University of Cambridge, where I was a Marshall Scholar, and I did a postdoc at Harvard studying the evolution of antibiotic resistance. My academic articles have been cited more than 28,000 times.


Learning with Limited Supervision

Stefano Ermon, Assistant Professor, Department of Computer Science, Stanford University

Date & Time: Wednesday, November 1, 2017, 6:30 PM – 8:30 PM PDT

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Registration Fee: IEEE CIS members: free
         Students - free
         IEEE (non-CIS) members - $5 donation
         Non-members - $5 pay at the door

Abstract: Many of the recent successes of machine learning have been characterized by the availability of large quantities of labeled data. By contrast, humans are often able to learn with very few labeled examples or with only high-level instructions for how a task should be performed. In this talk, I will present some new approaches for learning useful models in contexts where labeled training data is scarce or not available at all. I will first discuss and formally prove some limitations of existing training criteria used for learning hierarchical generative models. I will then introduce novel architectures and methods to overcome these limitations, allowing us to learn a hierarchy of interpretable features from unlabeled data. Finally, I will discuss ways to use prior knowledge (such as physical laws or simulators) to provide weak forms of supervision, showing how we can learn to solve useful tasks, including object tracking, without any labeled data.
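To make the last idea concrete, here is a minimal sketch of weak supervision from a physics prior, in the spirit of using known laws in place of labels; the specifics below are our own illustration, not necessarily the speaker's exact formulation. A model that predicts the height of a falling object gets no labels at all: its training signal is how far its predicted trajectory deviates from free fall, whose discrete second difference must equal -g * dt**2.

```python
# Sketch of a label-free training signal from a physics prior: score a
# predicted trajectory by how badly it violates constant-gravity dynamics.
import numpy as np

G, DT = 9.8, 0.1

def physics_loss(heights):
    """Penalty for violating free fall: the discrete second difference
    of a constant-gravity trajectory is exactly -G * DT**2."""
    accel = heights[2:] - 2 * heights[1:-1] + heights[:-2]
    return np.mean((accel + G * DT**2) ** 2)

# A trajectory that truly obeys free fall scores ~0; a noisy one does not.
rng = np.random.default_rng(0)
t = np.arange(10) * DT
free_fall = 10.0 - 0.5 * G * t**2
print("free fall loss:", physics_loss(free_fall))                       # ~0
print("noisy loss:   ", physics_loss(free_fall + 0.05 * rng.standard_normal(10)))
```

Used as the loss for a network that maps video frames to heights, this kind of consistency term can train an object tracker with no labeled positions.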

Biography: Stefano Ermon is currently an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory. He completed his PhD in computer science at Cornell in 2015. His research interests include techniques for scalable and accurate inference in graphical models, large-scale combinatorial optimization, and robust decision making under uncertainty, and his work is motivated by a range of applications, in particular ones in the emerging field of computational sustainability. Stefano's research has won several awards, including three Best Paper Awards and a World Bank Big Data Innovation Challenge, and was selected by Scientific American as one of the 10 World Changing Ideas of 2016. He is a recipient of the Sony Faculty Innovation Award and the NSF CAREER Award.


Hebbian Learning and the LMS Algorithm

Prof. Bernard Widrow, Professor of Electrical Engineering, Emeritus, Stanford University

Date & Time: Tuesday, September 26, 2017, 6:30 PM – 8:30 PM PDT

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Directions: TI-BldgE-Auditorium.pdf

Registration Link: (Mandatory) https://ieeecisscv_sept2017.eventbrite.com/

Registration Fee: IEEE members: Free
         Non-members: $5 (pay at door). You do not need to be an IEEE member to attend!

Abstract: Hebb's learning rule can be summarized as "neurons that fire together wire together." "Wire together" means that the weight of the synaptic connection between any two neurons is increased when both are firing. Hebb's rule is a form of unsupervised learning. Hebb introduced the concept of synaptic plasticity, and his rule is widely accepted in the field of neurobiology.

When imagining a neural network trained with this rule, a question naturally arises. What is learned with "fire together wire together," and what purpose could this rule actually have? Not having a good answer has long kept Hebbian learning from engineering applications. The issue is taken up here and possible answers will be forthcoming.

Strictly following Hebb's rule, weights could only increase, never decrease. This would eventually cause all weights to saturate, yielding a useless network. When extending Hebb's rule to make it workable, it was discovered that extended Hebbian learning could be implemented by means of the LMS algorithm. The result was the Hebbian-LMS algorithm.

The LMS (least mean square) algorithm was discovered by Widrow and Hoff in 1959, ten years after Hebb's classic book first appeared. The LMS algorithm optimizes with gradient descent. It is the most widely used learning algorithm today. It has been applied in telecommunications systems, control systems, signal processing, adaptive noise cancelling, adaptive antenna arrays, etc. It is at the foundation of the backpropagation algorithm of Paul Werbos.
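For reference, the classic supervised LMS update is only a few lines: for input vector x, desired response d, and weights w, the error is e = d - w.x and the gradient-descent step is w += 2*mu*e*x. The sketch below uses it to identify an unknown 4-tap filter; the setup and parameter values are illustrative.

```python
# The classic supervised LMS update of Widrow and Hoff (1959), here
# identifying an unknown 4-tap FIR system from noisy observations.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3, 0.2, 0.1])              # unknown system to identify

w = np.zeros(4)                                       # adaptive filter weights
mu = 0.05                                             # step size (learning rate)
x_hist = np.zeros(4)                                  # last 4 input samples
for _ in range(5000):
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = rng.standard_normal()                 # new input sample
    d = true_w @ x_hist + 0.01 * rng.standard_normal()  # desired (noisy) response
    e = d - w @ x_hist                                # instantaneous error
    w += 2 * mu * e * x_hist                          # LMS gradient-descent step

print("learned:", np.round(w, 3))                     # ~ [0.5, -0.3, 0.2, 0.1]
```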

Hebb's rule notwithstanding, the nature of the learning algorithm(s) that adapt and control the strength of synaptic connections in animal brains is for the most part unknown. The biochemistry of synaptic plasticity is largely understood, but the overall control algorithm is not understood. A solution to this mystery might be the Hebbian-LMS algorithm, a control process for unsupervised training of neural networks that perform clustering. Considering the structure of neurons, synapses, and neurotransmitters, the electrical and chemical signals necessary for the implementation of the Hebbian-LMS algorithm seem to be all there. Hebbian-LMS seems to be a natural algorithm. It is proving to be a simple useful algorithm that is easy to make work. Neuron to neuron connections are as simple as can be. All this raises a question. Could a brain or major portion of a brain be implemented with basic building blocks that perform clustering? Is clustering nature's fundamental neurological building block?

On the engineering side, layered neural networks trained with Hebbian-LMS have been simulated. Hidden layers are trained, unsupervised, with Hebbian-LMS while the output layer is trained with classic LMS, supervised. The hidden layers perform clustering. The output layer is fed clustered inputs, and from this makes the final classification decisions. Networks that are not layered, for example randomly connected, can be implemented with Hebbian-LMS neurons to provide inputs to an output classifier. The same training algorithm could be utilized.
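As a hedged illustration of the unsupervised neuron described above, the sketch below implements one published form of the Hebbian-LMS update, in which the "error" is the difference between the neuron's sigmoidal output and a scaled copy of its linear sum; the tanh sigmoid, the slope gamma = 0.5, and all other parameter values are our own assumptions. Driven by two unlabeled input clusters, the neuron's responses migrate to opposite saturation regions of the sigmoid, i.e., it clusters without labels.

```python
# Hedged sketch of a single Hebbian-LMS neuron: the unsupervised "error"
# is sigmoid(sum) - gamma * sum, followed by an LMS-style weight step.
# Sigmoid choice and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
mu, gamma = 0.01, 0.5

clusters = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]   # two unlabeled patterns
w = rng.standard_normal(2) * 0.1                            # small random weights

for _ in range(2000):
    x = clusters[rng.integers(2)] + 0.1 * rng.standard_normal(2)
    s = w @ x                                   # neuron's linear sum
    err = np.tanh(s) - gamma * s                # unsupervised error signal
    w += 2 * mu * err * x                       # LMS-style weight update

for c in clusters:
    print(np.round(np.tanh(w @ c), 2))          # outputs driven toward +/- saturation
```

In the layered network of the abstract, banks of such neurons would form the unsupervised hidden layers, with a final layer trained by the supervised LMS rule shown earlier.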

The Hebbian-LMS network is a general-purpose trainable classifier and gives performance comparable to a layered network trained with the backpropagation algorithm. The Hebbian-LMS network is much simpler to implement and easier to make work. It is too early to predict with certainty, but it seems highly likely that Hebbian-LMS will have many engineering applications in clustering, pattern classification, signal processing, control systems, and machine learning.

Biography: Bernard Widrow received the S.B., S.M., and Sc.D. degrees in Electrical Engineering from the Massachusetts Institute of Technology in 1951, 1953, and 1956, respectively. He joined the MIT faculty and taught there from 1956 to 1959. In 1959, he joined the faculty of Stanford University, where he is currently Professor of Electrical Engineering, Emeritus.

He began research on adaptive digital filters, learning processes, and artificial neural networks in 1957. Together with M. E. Hoff, Jr., his first doctoral student at Stanford, he invented the LMS algorithm in the autumn of 1959. Today it is the most widely used learning algorithm, found in every modem in the world. He has continued working on adaptive signal processing, adaptive noise cancelling, adaptive antennas, adaptive controls, and neural networks since that time.

Dr. Widrow is a Life Fellow of the IEEE and a Fellow of the AAAS. He received the IEEE Centennial Medal in 1984, the IEEE Alexander Graham Bell Medal in 1986, the IEEE Signal Processing Society Medal in 1986, the IEEE Neural Networks Pioneer Medal in 1991, the IEEE Millennium Medal in 2000, and the Benjamin Franklin Medal for Engineering from the Franklin Institute of Philadelphia in 2001. He was inducted into the National Academy of Engineering in 1995 and the Silicon Valley Engineering Hall of Fame in 1999.

Dr. Widrow is a past president and past member of the Governing Board of the International Neural Network Society. He is an associate editor of several journals and is the author of more than 125 technical papers and 21 patents. He is co-author of the Prentice-Hall book Adaptive Signal Processing, the IEEE Press book Adaptive Inverse Control, and the Cambridge University Press book Quantization Noise.