
Co-Sponsored Event

IEEE Workshops on Machine Learning, Convolutional Neural Networks, and TensorFlow

Dr. Kiran Gunnam, IEEE Distinguished Speaker and Distinguished Engineer - Machine Learning & Computer Vision at Western Digital

Event Co-Sponsors:

  • IEEE Silicon Valley Chapters of ComSoc, ITSoC, CIS
  • Apollo AI [Disrupting the automotive industry with groundbreaking solutions. Building low-cost super-safe self-driving system and HD maps]

Date & Time: Monday, September 24 & Tuesday, September 25, 2018, 4:00 PM – 9:00 PM PDT

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Registration Fee: IEEE CIS/RAS/VTS members: free
       TI employees: Workshop I and II: $45; Workshop I only: $27; Workshop II only: $27
       IEEE members: Workshop I and II: $270; Workshop I only: $180; Workshop II only: $180
       IEEE ComSoc/ITSoC/CIS members: Workshop I and II: $225; Workshop I only: $157.50; Workshop II only: $157.50
       Non-members: Workshop I and II: $315; Workshop I only: $202.50; Workshop II only: $202.50

Abstract: For the detailed list of topics covered, please see the link. Course slides in PDF and other workshop materials will be shared with registered attendees five days before the course. In addition, the workshop materials, including a TensorFlow installation, are also provided as a Docker image for a worry-free setup. Attendees of Workshop II should bring their own laptop, prepared either with the provided Docker image or with TensorFlow and the provided examples.
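
To confirm a laptop is ready for Workshop II, a quick sanity check of the TensorFlow installation is useful. This is a minimal sketch, assuming the TensorFlow 1.x session-style API that was current at the time of the workshop; the actual provided examples may differ:

```python
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

# Build and run a trivial graph to confirm the installation works
# (TF 1.x session-style API, current as of this workshop).
a = tf.constant(2.0)
b = tf.constant(3.0)
with tf.Session() as sess:
    print("2 + 3 =", sess.run(a + b))  # expect 5.0
```

If this prints a version string and 5.0, the setup is ready for the hands-on exercises.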

Biography: Dr. Kiran Gunnam is an innovative technology leader with vision and passion who effectively connects with individuals and groups. Dr. Gunnam's breakthrough contributions are in the areas of advanced error correction systems, storage class memory systems, and computer vision based localization and navigation systems. He has helped drive organizations to become industry leaders through ground-breaking technologies. Dr. Gunnam has 75 issued patents and 100+ patent applications/invention disclosures on algorithms, architectures, and real-time low-cost implementations for computing, storage, and computer vision systems. He is the lead or sole inventor for 90% of them. Dr. Gunnam's patented work has already been incorporated in more than 2 billion data storage and WiFi chips and is set to continue to be incorporated in more than 500 million chips per year. Dr. Gunnam is also a key contributor to the precise localization and navigation technology commercialized for autonomous aerial refueling and space docking applications. His recent patent-pending inventions on low-complexity simultaneous localization and mapping (SLAM) and 3D convolutional neural networks (CNNs) for object detection, tracking, and classification are being commercialized for LiDAR+camera based perception for autonomous driving and robotic systems. Dr. Gunnam received his MSEE and PhD in Computer Engineering from Texas A&M University, College Station. He is world-renowned for his balance of strong analytical ability and pragmatic insight into the implementation of advanced technology. He has served as an IEEE Distinguished Speaker and plenary speaker for 25+ events and international conferences, and more than 3,000 attendees in the USA, Canada, and Asia have benefited from his lectures. He also teaches a graduate-level course focused on machine learning systems at Santa Clara University.


On the Role of Structure in Learning for Robot Manipulation

Jeannette Bohg, Stanford University

Date & Time: Thursday, September 20, 2018, 6:00 PM – 8:00 PM PDT

Location: Intel SC12, 3600 Juliette Ln, Santa Clara, CA 95054

Registration Fee: IEEE CIS/RAS/VTS members: free
         Students - free
         IEEE members - free
         Non-members - $10 (Register at Door $15)

Abstract: Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment. First, interaction creates a rich sensory signal that would otherwise not be present. Second, knowledge of the sensory dynamics upon interaction allows prediction and decision-making over a longer time horizon. To exploit these benefits of Interactive Perception for capable robotic manipulation, a robot requires both methods for processing rich sensory feedback and feedforward predictors of the effect of physical interaction. In the first part of this talk, I will present a method for motion-based segmentation of an unknown number of simultaneously moving objects. The underlying model estimates dense, per-pixel scene flow, which is then clustered in motion-trajectory space. We show how this outperforms the state of the art in scene flow estimation and multi-object segmentation. In the second part, I will present a method for predicting the effect of physical interaction with objects in the environment. The underlying model combines an analytical physics model with a learned perception part. In extensive experiments, we show how this hybrid model outperforms purely learned models in terms of generalisation. In both projects, we found that introducing structure greatly reduces the amount of training data required, eases learning, and enables extrapolation. Based on these findings, I will discuss the role of structure in learning for robot manipulation.
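
To make the segmentation idea concrete, here is a minimal sketch of clustering per-pixel motion trajectories. All shapes and data are hypothetical stand-ins (random noise rather than a learned scene-flow model), and it uses a fixed number of clusters for simplicity, whereas the talk's method handles an unknown number of objects:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in data: per-pixel 3D scene-flow trajectories over
# T frames, shape (H, W, T, 3). In the talk these come from a learned
# dense scene-flow model, not random noise.
H, W, T = 64, 64, 10
rng = np.random.default_rng(0)
trajectories = rng.standard_normal((H, W, T, 3))

# Flatten each pixel's trajectory into one feature vector and cluster in
# motion-trajectory space; each cluster is treated as one moving object.
features = trajectories.reshape(H * W, T * 3)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
segmentation = labels.reshape(H, W)
print("segment sizes:", np.bincount(labels))
```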

Biography: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at MPI until September 2017 and remains affiliated as a guest researcher. Her research focuses on perception for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning. Before joining the Autonomous Motion lab in January 2012, Jeannette Bohg was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. Her thesis, on multi-modal scene understanding for robotic grasping, was supervised by Prof. Danica Kragic. She studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master in Art and Technology and her Diploma in Computer Science, respectively.


When your big data seems too small: accurate inferences beyond the empirical distribution

Gregory Valiant, Stanford University

Date & Time: Thursday, August 23, 2018, 6:00 PM – 8:00 PM PDT

Location: Intel SC12, 3600 Juliette Ln, Santa Clara, CA 95054

Registration Fee: IEEE CIS members: free
         Students - $3 (Register at Door $3)
         IEEE (non-CIS) members - $7 donation (Register at Door $10)
         Non-members - $10 (Register at Door $15)

Abstract: We discuss several problems related to the general challenge of making accurate inferences about a complex phenomenon, in the regime in which the amount of available data (i.e., the sample size) is too small for the empirical distribution of the samples to be an accurate representation of the phenomenon in question. We show that for several fundamental and practically relevant settings, including estimating the covariance structure of a high-dimensional distribution, and learning a population of distributions given few data points from each individual, it is possible to "denoise" the empirical distribution significantly. We will also discuss the problem of estimating the "learnability" of a dataset: given too little labeled data to train an accurate model, we show that it is often possible to estimate the extent to which a good model exists. Framed differently, even in the regime in which there is insufficient data to learn, it is possible to estimate the performance that could be achieved if additional data (drawn from the same data source) were obtained. Our results, while theoretical, have a number of practical applications, which we will also discuss.
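
As a standard illustration of the phenomenon (not the speaker's method), shrinkage estimators show how an empirical covariance can be "denoised" when the sample size is small relative to the dimension. The data here are synthetic, with identity as the true covariance:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Synthetic example: n samples in d dimensions, true covariance = identity.
rng = np.random.default_rng(0)
d, n = 100, 30
X = rng.standard_normal((n, d))

empirical = np.cov(X, rowvar=False)          # plain empirical estimate
shrunk = LedoitWolf().fit(X).covariance_     # shrinkage-"denoised" estimate

true_cov = np.eye(d)
print("empirical covariance error:", np.linalg.norm(empirical - true_cov))
print("shrinkage covariance error:", np.linalg.norm(shrunk - true_cov))
```

With n much smaller than d, the shrinkage estimate is markedly closer to the truth, illustrating how inference can beat the raw empirical distribution.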

Biography: Gregory Valiant is an assistant professor of Computer Science at Stanford University. His current research interests span algorithms, statistics, and machine learning, with an emphasis on developing algorithms and information theoretic lower bounds for a variety of fundamental data-centric tasks. Recently, this work has also included questions of how to robustly extract meaningful information from untrusted datasets that might contain a significant fraction of corrupted or arbitrarily biased data points. Prior to joining Stanford, Gregory completed his PhD at UC Berkeley in 2012, and was a postdoctoral researcher at Microsoft Research, New England. He has received several honors, including the ACM Dissertation Award Honorable Mention, NSF CAREER Award, and Sloan Foundation Fellowship.


Interpretable, integrative deep learning for decoding the human genome

Anshul Kundaje, Stanford University

Date & Time: Wednesday, August 1, 2018, 6:00 PM – 8:00 PM PDT

Location: Intel SC9 - Auditorium(Santa Clara 9), 2250 Mission College Blvd, Santa Clara, CA 95054

Registration Fee: IEEE CIS members: free
         Students - free
         IEEE (non-CIS) members - free
         Non-members - free

Abstract: The human genome contains the fundamental code that defines the identity and function of all the cell types and tissues in the human body. Genes are functional sequence units that encode proteins. But they account for just about 2% of the 3-billion-nucleotide human genome sequence. What does the rest of the genome encode? How is gene activity controlled in each cell type? Where do the regulatory control elements lie and what is their sequence composition? How do variants and mutations in the genome sequence affect cellular function and disease? These are fundamental questions that remain largely unanswered. The regulatory code that controls gene activity is made up of complex genome sequence grammars representing hierarchically organized units of regulatory elements. These functional words and grammars are sparsely distributed across billions of nucleotides of genomic sequence and remain largely elusive. Deep learning has revolutionized our understanding of natural language, speech, and vision. We strongly believe it has the potential to revolutionize our understanding of the regulatory language of the genome. We have developed integrative supervised deep learning frameworks to learn how genomic sequence encodes millions of experimentally measured regulatory genomic events across hundreds of cell types and tissues. We have developed novel methods to interpret our models and extract local and global predictive patterns, revealing many insights into the regulatory code. We demonstrate how our deep learning models can reveal the regulatory code that controls differentiation and identity of diverse blood cell types. Our models also allow us to predict the effects of natural and disease-associated genetic variation, i.e., how differences in DNA sequence across healthy and diseased individuals are likely to affect molecular mechanisms associated with complex traits and diseases.
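
To illustrate the general sequence-to-regulatory-activity modeling approach (a minimal sketch with a hypothetical toy task and shapes, not the Kundaje lab's architecture), a small 1D convolutional network can be trained on one-hot-encoded DNA:

```python
import numpy as np
from tensorflow import keras

# Hypothetical toy task: 1,000 DNA sequences of length 200, one-hot encoded
# over {A, C, G, T}, each labeled with a binary regulatory event.
rng = np.random.default_rng(0)
idx = rng.integers(0, 4, size=(1000, 200))
X = np.eye(4, dtype="float32")[idx]            # shape (1000, 200, 4)
y = rng.integers(0, 2, size=(1000, 1)).astype("float32")

# A small 1D CNN: convolution filters act like learnable sequence-motif
# detectors scanned along the genome.
model = keras.Sequential([
    keras.layers.Conv1D(32, 12, activation="relu", input_shape=(200, 4)),
    keras.layers.GlobalMaxPooling1D(),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```

Interpreting the learned convolution filters is one route to the "predictive patterns" the abstract mentions, since each filter responds to a short sequence motif.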

Biography: Anshul Kundaje is an Assistant Professor of Genetics and Computer Science at Stanford University. The Kundaje lab develops statistical and machine learning methods for large-scale integrative analysis of functional genomic data to decode regulatory elements and pathways across diverse cell types and tissues and understand their role in cellular function and disease. Anshul completed his Ph.D. in Computer Science in 2008 from Columbia University. As a postdoc at Stanford University from 2008-2012 and a research scientist at MIT and the Broad Institute from 2012-2014, he led the integrative analysis efforts for two of the largest functional genomics consortia - The Encyclopedia of DNA Elements (ENCODE) and The Roadmap Epigenomics Project. Dr. Kundaje is a recipient of the 2016 NIH Director’s New Innovator Award and The 2014 Alfred Sloan Foundation Fellowship.


Neuromorphic Chips - Addressing the Nanotransistor Challenge by Combining Analog Computation with Digital Communication

Kwabena Boahen, Stanford University

Date & Time: Wednesday, June 27, 2018, 6:00 PM – 8:00 PM PDT

Location: H2O.ai - 2307 Leghorn St, Mountain View, CA 94043

Registration Fee: IEEE CIS members: free
         Students - $3 (Register at Door $3)
         IEEE (non-CIS) members - $7 donation (Register at Door $10)
         Non-members - $10 (Register at Door $15)

Abstract: As transistors shrink to nanoscale dimensions, trapped electrons--blocking "lanes" of electron traffic--are making it difficult for digital computers to work. In stark contrast, the brain works fine with single-lane nanoscale devices that are intermittently blocked (ion channels). Conjecturing that it achieves error-tolerance by combining analog dendritic computation with digital axonal communication, neuromorphic engineers (neuromorphs) began emulating dendrites with subthreshold analog circuits and axons with asynchronous digital circuits in the mid-1980s. Three decades in, they achieved a consequential scale with Neurogrid, the first neuromorphic system with billions of synaptic connections. Neuromorphs then tackled the challenge of mapping arbitrary computations onto neuromorphic chips in a manner robust to lanes intermittently--or even permanently--blocked by trapped electrons. Having demonstrated scalability and programmability, they now seek to encode continuous signals with spike trains in a manner that promises greater energy efficiency than all-analog or all-digital computing across a five-decade precision range.
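
For intuition about encoding continuous signals with spike trains, here is a toy rate-coding illustration (not Neurogrid's scheme, and all parameters are hypothetical): a signal is turned into Poisson spikes and then recovered by smoothing spike counts.

```python
import numpy as np

# Rate-code a continuous signal in [0, 1] as a Poisson spike train, then
# decode by smoothing spike counts over a short window.
rng = np.random.default_rng(1)
dt, T = 1e-3, 1.0                                  # 1 ms steps, 1 s total
t = np.arange(0, T, dt)
signal = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))     # 2 Hz test signal

max_rate = 200.0                                   # spikes/s at signal == 1
spikes = rng.random(t.shape) < signal * max_rate * dt

window = int(0.05 / dt)                            # 50 ms decoding window
rate = np.convolve(spikes, np.ones(window) / (window * dt), mode="same")
print("mean decode error:", np.abs(rate / max_rate - signal).mean())
```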

Biography: Kwabena Boahen is a Professor of Bioengineering and Electrical Engineering at Stanford University, where he directs the Brains in Silicon Lab. He is a neuromorphic engineer who is using silicon integrated circuits to emulate the way neurons compute, and linking the seemingly disparate fields of electronics and computer science with neurobiology and medicine. His lab developed Neurogrid, a specialized hardware platform created at Stanford that enables the cortex's inner workings to be simulated in real time--something outside the reach of even the fastest supercomputers. His interest in neural nets developed soon after he left his native Ghana to pursue undergraduate studies in Electrical and Computer Engineering at Johns Hopkins University, Baltimore, in 1985. He went on to earn a doctorate in Computation and Neural Systems at the California Institute of Technology in 1997. From 1997 to 2005 he was on the faculty of University of Pennsylvania, Philadelphia PA. With over ninety publications to his name, including a cover story in the May 2005 issue of Scientific American, his scholarship has been recognized by several distinguished honors, including the National Institutes of Health Director's Pioneer Award in 2006. In 2016, he was named a fellow of the Institute of Electrical and Electronics Engineers and of the American Institute for Medical and Biological Engineering. His 2007 TED talk, A Computer that Works like the Brain, has been viewed over half a million times.


Augmenting Cognition through Data Visualization

Alark Joshi, Associate Professor in the Department of Computer Science at the University of San Francisco

Date & Time: Thursday, May 17, 2018, 6:00 PM – 8:00 PM PDT

Location: Intel SC12, 3600 Juliette Ln, Santa Clara, CA 95054

Registration Fee: IEEE CIS members: free
         Students - $3 (Register at Door $3)
         IEEE (non-CIS) members - $7 donation (Register at Door $10)
         Non-members - $10 (Register at Door $15)

Abstract: As datasets get larger, it has become increasingly important to manage visual attention and augment cognition. My research has focused on developing novel techniques to draw a viewer's attention to regions of interest in large datasets. These techniques are specifically designed to help users complete tasks more efficiently and to improve the user experience. In this talk, I will present my approaches to solving real-world problems in the fields of atmospheric physics, neurosurgery, and mobile data visualization. Our techniques provide temporal context to atmospheric physicists for improved hurricane exploration. I will also present augmented visualization techniques for image-guided navigation during neurosurgery that have resulted in increased accuracy and confidence among neurosurgeons compared to the state of the art. I will also present new work from our lab on foveated visualization for rapid exploration, which addresses computational complexity when visualizing large datasets.

Biography: Alark Joshi is an Associate Professor in the Department of Computer Science at the University of San Francisco, where he works on data visualization projects for improved neurosurgical planning and treatment. His research focuses on developing and evaluating the ability of novel visualization techniques to communicate information for effective decision making and discovery. His work has led to novel visualization techniques in fields as diverse as computational fluid dynamics, atmospheric physics, medical imaging, and cell biology. He received his postdoctoral training at Yale University and his Ph.D. in Computer Science from the University of Maryland Baltimore County.


Running Sparse and Low-Precision Neural Networks: An Interactive Play between Software and Hardware

Hai "Helen" Li, Associate Professor in the Department of Electrical and Computer Engineering, Duke University

Date & Time: Thursday, March 8, 2018, 6:30 PM – 8:30 PM PST

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Registration Fee: IEEE CIS members: free
         Students - $3 (Register at Door $3)
         IEEE (non-CIS) members - $7 donation (Register at Door $10)
         Non-members - $10 (Register at Door $15)

Abstract: Following technology advances in high-performance computing systems and the fast growth of data acquisition, machine learning, especially deep learning, has achieved remarkable success in many research areas and applications. Such success, to a great extent, is enabled by developing large-scale deep neural networks (DNNs) that learn from a huge volume of data. The deployment of such a big model, however, is both computation-intensive and memory-intensive. Though hardware acceleration for neural networks has been extensively studied, the progress of hardware development still falls far behind the upscaling of DNN models at the software level. We envision that hardware/software co-design for performance acceleration of deep neural networks is necessary. In this talk, I will start with the trends of machine learning study in academia and industry, followed by our study on how to run sparse and low-precision neural networks, demonstrating an interactive play between software and hardware.
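
To ground the two ideas in the title, here is a minimal sketch (illustrative only, not the speaker's method) of the standard building blocks: magnitude pruning for sparsity and uniform 8-bit quantization for low precision, applied to a random stand-in weight matrix:

```python
import numpy as np

# Stand-in for a trained weight matrix of one network layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)

# Sparsity: zero out the 90% of weights with smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_sparse = np.where(np.abs(W) >= threshold, W, 0.0)

# Low precision: quantize the surviving weights to 8-bit integers.
scale = np.abs(W_sparse).max() / 127.0
W_q = np.round(W_sparse / scale).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale           # dequantized for comparison

print("nonzero fraction:", (W_sparse != 0).mean())
print("quantization MSE:", np.mean((W_deq - W_sparse) ** 2))
```

The hardware/software co-design question in the talk is how to make accelerators exploit exactly this kind of sparse, low-precision representation efficiently.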

Biography: Hai “Helen” Li received the B.S. and M.S. degrees from Tsinghua University, Beijing, China, and the Ph.D. degree from the Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA, in 2004. She is currently the Clare Boothe Luce Associate Professor with the Department of Electrical and Computer Engineering at Duke University, Durham, NC, USA. She was previously with Qualcomm Inc., San Diego, CA, USA; Intel Corporation, Santa Clara, CA, USA; Seagate Technology, Bloomington, MN, USA; the Polytechnic Institute of New York University, Brooklyn, NY, USA; and the University of Pittsburgh, Pittsburgh, PA, USA. She has authored or co-authored over 200 technical papers published in peer-reviewed journals and conferences and holds 70+ granted U.S. patents.

Professor Hai “Helen” Li authored a book entitled Nonvolatile Memory Design: Magnetic, Resistive, and Phase Changing (CRC Press, 2011). Her current research interests include memory design and architecture, neuromorphic architecture for brain-inspired computing systems, and architecture/circuit/device cross-layer optimization for low power and high performance. Dr. Li serves as an Associate Editor of TVLSI, TCAD, TODAES, TMSCS, TECS, CEM, and the IET Cyber-Physical Systems: Theory & Applications. She has served as an organizing committee and technical program committee member for over 30 international conference series. She received the NSF CAREER Award (2012), the DARPA YFA Award (2013), the TUM-IAS Hans Fischer Fellowship (2017), seven best paper awards, and another seven best paper nominations. Dr. Li is a senior member of IEEE and a distinguished member of ACM.

Deep Learning in Biomedicine and Genomics: An Introduction and Applications to Next-generation Sequencing and Disease Diagnostics

Mark DePristo, Head of Deep Learning for Genetics and Genomics at Google

Date & Time: Monday, December 4, 2017, 6:30 PM – 8:30 PM PST

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Registration Fee: IEEE CIS members: free
         Students - free
         IEEE (non-CIS) members - $5 donation
         Non-members - $5 pay at the door

Unfortunately, slides and a YouTube video are not available for this event.

Abstract:
  • We will review the history and taxonomy of machine learning and artificial intelligence.
  • We will introduce deep learning, covering both what it is and why it's so exciting.
  • We will highlight a few deep learning applications to biomedical problems across the field.
  • We will do a deep dive into three recent deep learning applications from Google Brain:
      • Detection of cancer cells in pathology images
      • Detection of diabetic retinopathy from fundus images of the eye
      • Calling SNP and indel variants in next-generation sequencing data

Biography: I'm Mark DePristo, a Google software engineer since 2015. I lead the Google Brain Genomics team, where we work on advancing the capabilities and applications of deep learning tech in TensorFlow for genomics problems. Before joining Google I was Vice President of Informatics at SynapDx, a Google Ventures-backed startup developing a blood-based test for Autism. And before that I was Co-Director of Medical and Population Genetics at the Broad Institute, where I created and led the ~10 person team that developed the GATK, the dominant software for processing next-generation DNA sequencing data. I have a BA in Computer Science and Math from Northwestern, a PhD in Biochemistry from the University of Cambridge, where I was a Marshall fellow, and finally postdoc'd at Harvard to study antibiotic resistance evolution. My academic articles have been widely cited, with more than 28,000 citations.


Learning with limited supervision

Stefano Ermon, Assistant Professor, Department of Computer Science, Stanford University

Date & Time: Wednesday, November 1, 2017, 6:30 PM – 8:30 PM PDT

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Registration Fee: IEEE CIS members: free
         Students - free
         IEEE (non-CIS) members - $5 donation
         Non-members - $5 pay at the door

Abstract: Many of the recent successes of machine learning have been characterized by the availability of large quantities of labeled data. Nonetheless, we observe that humans are often able to learn with very few labeled examples or with only high level instructions for how a task should be performed. In this talk, I will present some new approaches for learning useful models in contexts where labeled training data is scarce or not available at all. I will first discuss and formally prove some limitations of existing training criteria used for learning hierarchical generative models. I will then introduce novel architectures and methods to overcome these limitations, allowing us to learn a hierarchy of interpretable features from unlabeled data. Finally, I will discuss ways to use prior knowledge (such as physics laws or simulators) to provide weak forms of supervision, showing how we can learn to solve useful tasks, including object tracking, without any labeled data.
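
To illustrate the weak-supervision-from-physics idea (a minimal sketch under assumed parameters, not necessarily the speaker's formulation): a model's predictions can be scored against a physical law instead of labels. Here, predicted heights of a falling object are penalized when their discrete acceleration deviates from gravity.

```python
import numpy as np

# A network might predict object heights h_t from video frames; free fall
# implies near-constant discrete acceleration, which can be penalized as a
# loss without any labels: h_{t+1} - 2*h_t + h_{t-1} should equal -g*dt^2.
g, dt = 9.8, 0.1

def physics_loss(h):
    accel = h[2:] - 2 * h[1:-1] + h[:-2]       # discrete second difference
    return np.mean((accel + g * dt * dt) ** 2)

t = np.arange(0.0, 1.0, dt)
true_h = 10.0 - 0.5 * g * t ** 2               # consistent with free fall
noisy_h = true_h + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print("physics-consistent loss:", physics_loss(true_h))    # near zero
print("physics-violating loss: ", physics_loss(noisy_h))   # much larger
```

Minimizing such a loss over a predictor's outputs steers it toward physically plausible trajectories with no labeled positions at all.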

Biography: Stefano Ermon is currently an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory. He completed his PhD in computer science at Cornell in 2015. His research interests include techniques for scalable and accurate inference in graphical models, large-scale combinatorial optimization, and robust decision making under uncertainty; his work is motivated by a range of applications, in particular in the emerging field of computational sustainability. Stefano's research has won several awards, including three Best Paper Awards and a World Bank Big Data Innovation Challenge, and his work was selected by Scientific American as one of the 10 World Changing Ideas of 2016. He is a recipient of the Sony Faculty Innovation Award and the NSF CAREER Award.


Hebbian Learning and the LMS Algorithm

Prof. Bernard Widrow, Professor of Electrical Engineering, Emeritus, Stanford University

Date & Time: Tuesday, September 26, 2017, 6:30 PM – 8:30 PM PDT

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Directions: TI-BldgE-Auditorium.pdf

Registration Link: (Mandatory) https://ieeecisscv_sept2017.eventbrite.com/

Registration Fee: IEEE members: Free
         Non-members: $5 (pay at door). You do not need to be an IEEE member to attend!

Abstract: Hebb's learning rule can be summarized as "neurons that fire together wire together." Wire together means that the weight of the synaptic connection between any two neurons is increased when both are firing. Hebb's rule is a form of unsupervised learning. Hebb introduced the concept of synaptic plasticity, and his rule is widely accepted in the field of neurobiology.

When imagining a neural network trained with this rule, a question naturally arises. What is learned with "fire together wire together," and what purpose could this rule actually have? Not having a good answer has long kept Hebbian learning from engineering applications. The issue is taken up here and possible answers will be forthcoming.

Strictly following Hebb's rule, weights could only increase, never decrease. This would eventually cause all weights to saturate, yielding a useless network. When extending Hebb's rule to make it workable, it was discovered that extended Hebbian learning could be implemented by means of the LMS algorithm. The result was the Hebbian-LMS algorithm.

The LMS (least mean square) algorithm was discovered by Widrow and Hoff in 1959, ten years after Hebb's classic book first appeared. The LMS algorithm optimizes with gradient descent. It is the most widely used learning algorithm today. It has been applied in telecommunications systems, control systems, signal processing, adaptive noise cancelling, adaptive antenna arrays, etc. It is at the foundation of the backpropagation algorithm of Paul Werbos.
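
A minimal sketch of the LMS update itself (the weights, step size, and data here are illustrative, not from the talk): each sample nudges the weight vector along the instantaneous error gradient.

```python
import numpy as np

# LMS (least mean square): adapt weights w of a linear predictor by
# stochastic gradient descent on the squared instantaneous error.
rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0, 0.5])    # unknown system to be identified

w = np.zeros(3)
mu = 0.01                              # learning rate (step size)
for _ in range(5000):
    x = rng.standard_normal(3)         # input sample
    d = w_true @ x                     # desired response
    e = d - w @ x                      # instantaneous error
    w += 2 * mu * e * x                # Widrow-Hoff update: w <- w + 2*mu*e*x

print("learned weights:", np.round(w, 3))   # converges toward w_true
```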

Hebb's rule notwithstanding, the nature of the learning algorithm(s) that adapt and control the strength of synaptic connections in animal brains is for the most part unknown. The biochemistry of synaptic plasticity is largely understood, but the overall control algorithm is not understood. A solution to this mystery might be the Hebbian-LMS algorithm, a control process for unsupervised training of neural networks that perform clustering. Considering the structure of neurons, synapses, and neurotransmitters, the electrical and chemical signals necessary for the implementation of the Hebbian-LMS algorithm seem to be all there. Hebbian-LMS seems to be a natural algorithm. It is proving to be a simple useful algorithm that is easy to make work. Neuron to neuron connections are as simple as can be. All this raises a question. Could a brain or major portion of a brain be implemented with basic building blocks that perform clustering? Is clustering nature's fundamental neurological building block?

On the engineering side, layered neural networks trained with Hebbian-LMS have been simulated. Hidden layers are trained, unsupervised, with Hebbian-LMS while the output layer is trained with classic LMS, supervised. The hidden layers perform clustering. The output layer is fed clustered inputs, and from this makes the final classification decisions. Networks that are not layered, for example randomly connected, can be implemented with Hebbian-LMS neurons to provide inputs to an output classifier. The same training algorithm could be utilized.
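
As a structural illustration of that pipeline only (using off-the-shelf k-means as a stand-in for the Hebbian-LMS clustering rule, which it is not, with synthetic data), the unsupervised-hidden-layer-plus-supervised-LMS-output arrangement might look like:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unsupervised "hidden layer": cluster the inputs (k-means as a stand-in
# for Hebbian-LMS clustering). Supervised output layer: classic LMS
# trained on one-hot cluster codes.
X, y = make_blobs(n_samples=500, centers=4, random_state=0)
y = (y % 2).astype(float)                      # binary class labels

hidden = KMeans(n_clusters=4, n_init=10).fit(X)
H = np.eye(4)[hidden.predict(X)]               # one-hot cluster codes

w, mu = np.zeros(4), 0.05                      # output layer, LMS-trained
for _ in range(20):                            # a few passes over the data
    for xi, di in zip(H, y):
        e = di - w @ xi
        w += 2 * mu * e * xi

pred = (H @ w) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())
```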

The Hebbian-LMS network is a general purpose trainable classifier and gives performance comparable to a layered network trained with the backpropagation algorithm. The Hebbian-LMS network is much simpler to implement and easier to make work. It is early to predict, but it seems highly likely that Hebbian-LMS will have many engineering applications to clustering, pattern classification, signal processing, control systems, and to machine learning.

Biography: Bernard Widrow received the S.B., S.M., and Sc.D. degrees in Electrical Engineering from the Massachusetts Institute of Technology in 1951, 1953, and 1956, respectively. He joined the MIT faculty and taught there from 1956 to 1959. In 1959, he joined the faculty of Stanford University, where he is currently Professor of Electrical Engineering, Emeritus.

He began research on adaptive digital filters, learning processes, and artificial neural networks in 1957. Together with M.E. Hoff, Jr., his first doctoral student at Stanford, he invented the LMS algorithm in the autumn of 1959. Today, this is the most widely used learning algorithm, used in every modem in the world. He has continued working on adaptive signal processing, adaptive noise cancelling, adaptive antennas, adaptive controls, and neural networks since that time.

Dr. Widrow is a Life Fellow of the IEEE and a Fellow of AAAS. He received the IEEE Centennial Medal in 1984, the IEEE Alexander Graham Bell Medal in 1986, the IEEE Signal Processing Society Medal in 1986, the IEEE Neural Networks Pioneer Medal in 1991, the IEEE Millennium Medal in 2000, and the Benjamin Franklin Medal for Engineering from the Franklin Institute of Philadelphia in 2001. He was inducted into the National Academy of Engineering in 1995, and the Silicon Valley Engineering Hall of Fame in 1999.

Dr. Widrow is a past president and past member of the Governing Board of the International Neural Network Society. He is associate editor of several journals and is the author of more than 125 technical papers and 21 patents. He is co-author of the Prentice-Hall book 'Adaptive Signal Processing', the IEEE Press book 'Adaptive Inverse Control', and the Cambridge University Press book 'Quantization Noise'.