Norfolk Waterside Convention Center

November 10-16, 2002

Short Courses

Radiation Detection and Measurement, Sunday & Monday, Nov. 10 - 11.

Triggering in Particle Physics Experiments, Sunday, Nov. 10.

Integrated Circuit Front Ends for Nuclear Pulse Processing, Monday, Nov. 11.

Nuclear Emission Imaging Detectors, Systems and Methods for Breast Cancer Evaluation, Monday, Nov. 11.

Multi-Modality Imaging Devices, Tuesday, Nov. 12.

Analytical Reconstruction Methods, Tuesday, Nov. 12.

Task Based Assessment of Image Quality, Tuesday, Nov. 12.

Statistical Methods for Image Reconstruction, Tuesday, Nov. 12.

Discounted short course fees are available for some students and postdocs.

Radiation Detection and Measurement

Sunday and Monday, November 10-11, 2002
Time: 8:30 AM - 5:00 PM

Organizer: Prof. Glenn F. Knoll, University of Michigan

Instructors: Stephen Derenzo, Lawrence Berkeley National Laboratory
Eugene Haller, UC Berkeley & Lawrence Berkeley National Lab
Glenn Knoll, University of Michigan
Fabio Sauli, CERN, Geneva
Helmuth Spieler, Lawrence Berkeley National Laboratory


This 2-day course provides a short review of the basic principles that underlie the operation of the major types of instruments used in the detection and spectroscopy of charged particles, gamma rays, and other forms of ionizing radiation. Examples of both established applications and recent developments are drawn from areas including particle physics, nuclear medicine, and general radiation spectroscopy. Emphasis is on understanding the fundamental processes that govern the operation of radiation detectors, rather than on operational details that are unique to specific commercial instruments. Topics are also included on the pulse processing techniques that are needed to properly record the information provided by the detection devices. This course does not cover radiation dosimetry or health physics instrumentation. The level of presentation is best suited to those with some prior background in radiation measurements, but the course can also serve to introduce topics outside that experience base.
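As a small taste of the fundamentals such a course covers, the statistical floor on a detector's energy resolution follows directly from counting statistics of the information carriers. The sketch below is illustrative only (not course material); the function name and the example numbers are invented for the illustration.

```python
import math

def poisson_limited_resolution(n_carriers, fano=1.0):
    # Statistical limit on FWHM energy resolution from the number of
    # information carriers N (e.g. photoelectrons or electron-hole pairs):
    #   R = 2.355 * sqrt(F / N)
    # where F is the Fano factor (F = 1 for pure Poisson statistics).
    return 2.355 * math.sqrt(fano / n_carriers)

# An event yielding ~10,000 carriers has a statistical resolution
# floor of roughly 2.4% FWHM; semiconductors (F << 1) do much better.
r = poisson_limited_resolution(10000)
```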


1. General Properties of Detectors
2. Gas-Filled Detectors
3. Scintillation Counters
4. Semiconductor Detectors
5. Pulse Processing and Analysis
6. Summary and Intercomparison, New Detector Developments

The following will be provided as part of the course:
Class notes
Textbook - "Radiation Detection and Measurement", 3rd Edition, by G. Knoll
Lunch both days
Refreshments at the morning and afternoon breaks
Certificate of completion

Fee: $400 for IEEE Members


Triggering in Particle Physics Experiments

Sunday, November 10, 2002
Time: 8:30 AM - 5:00 PM

Organizer: Peter Wilson, Fermi National Accelerator Laboratory
Instructors: Sridhara Dasu - Univ of Wisconsin (BABAR, CMS)
Levan Babukhadia - SUNY at Stony Brook (D0)
Giovanni Punzi - INFN Pisa (CDF)


A critical component of particle physics experiment design is determining which events to store for further analysis and how to make that decision. The job of the Trigger is to quickly discard uninteresting events while efficiently culling the most interesting events in as unbiased a manner as possible. In most experiments the rate at which detector data is sampled, such as the beam crossing rate for a colliding beam experiment, is much higher than the rate of physics interactions of primary interest. At the same time, the volume of data from digitizing all readout channels is frequently too high to be practically read out by a data acquisition system (DAQ) for later analysis, let alone be fully reconstructed in real time. Some reduction of the needed bandwidth can be achieved within the DAQ system by suppressing channels with no interesting data (sparsification) or by other data compression methods. While sparsification can reduce the needed bandwidth by factors of 10 or 100, reduction by factors of a million is often achieved with a combination of triggering and data compression. For example, the Run II trigger systems for the CDF and D0 experiments at the Fermilab Tevatron collider each reduce an input of about 10 TBytes/sec to an output of about 20 MBytes/sec recorded on tape. This course will discuss the design of trigger systems for particle physics, ranging from cosmic ray experiments to future colliding beam experiments such as those at the Large Hadron Collider.
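The Run II numbers quoted above imply an overall rate reduction of about 500,000. A back-of-envelope sketch (the per-level split below is purely hypothetical, not the actual CDF or D0 budget):

```python
# Bandwidth budget implied by the quoted CDF/D0 Run II figures:
input_rate = 10e12       # bytes/s presented to the front end (~10 TBytes/s)
output_rate = 20e6       # bytes/s written to tape (~20 MBytes/s)
overall_reduction = input_rate / output_rate   # factor of 500,000

# A multi-level trigger spreads that factor over stages; this split is
# illustrative only (invented for the example, not a real allocation):
l1_rejection = 400.0     # fast hardware level
l2_rejection = 250.0     # intermediate hardware/software level
l3_rejection = 5.0       # software farm running near-offline code
combined = l1_rejection * l2_rejection * l3_rejection
```

The point of the split is that no single stage could deliver a 500,000x rejection in the time available; each level buys the next one more time per event.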

The course will cover overall trigger system design, with particular attention to the impact of the beam environment and the data acquisition design. It will also cover the design of trigger subsystems which do fast partial event reconstruction and pass information to more global decision hardware, a process often referred to as generating trigger primitives. The focus will be on primitives that are common to many modern HEP experiments: charged-particle track reconstruction, and calorimeter and muon triggers. Also covered will be systems to reconstruct tracks from detached vertices, a more recent and more complicated task. Specific examples from past, current and future experiments will be used to illustrate the techniques of each topic and the progression of those techniques with improving technology. Comparisons will also be made between different types of experiments (e.g. cosmic ray, fixed target, colliding beam).

The overall system design of the trigger is very closely coupled to the structure of the experiment's beam. The particle type, beam energy and timing structure all have a large impact on the rate of particle interactions, both interesting (signal) and uninteresting (background). The beam environment can vary from neutrinos produced in the sun (no timing structure) to proton-antiproton collisions in bunches separated by 25 ns. The complexity of the detector systems also impacts the trigger design: what types of events must the experiment detect, and how many channels are there? The trigger design is also intimately related to the DAQ architecture, since the DAQ must feed data to the trigger and the trigger must tell the DAQ what to do with the data. We will discuss how these issues impact the trigger system design: for example, how many decision levels are needed, and which levels will be implemented in hardware, which in software, and which as a combination of the two. We will show how these decisions have changed over time with improving technology (the impact of Moore's Law).

The design of subsystems to generate trigger primitives must be closely connected to the design of the detector subsystems and the front-end electronics which read them out. These systems do fast reconstruction of event data with a very focused purpose: to minimize execution time, only certain classes of objects are reconstructed (e.g. tracks above a minimum momentum threshold). We will focus on reconstruction of physics objects in the lower-level parts of trigger systems; since most Level 3 triggers are based on computing farms running offline-type reconstruction code, their design requirements are not particularly unique to the trigger application. The most frequently used trigger primitives are from calorimeters, for electron, photon and jet reconstruction, along with muons from muon detectors. These have provided and continue to provide standard signatures for many types of particle decays. We will cover these along with the next most common trigger type, reconstruction of charged particle tracks in tracking chambers. These charged tracks are used on their own or matched to objects found in calorimeters (e.g. electrons) or muon detectors. Recently, very powerful trigger processors have been developed to exploit the long lifetime of heavy quarks (b or c quarks) through the presence of displaced tracks or detached track vertices. These detached vertex triggers are very challenging but are already revolutionizing triggering in hadron collider experiments.
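As a toy illustration of a calorimeter trigger primitive, a sliding-window cluster finder over trigger towers might look like the sketch below. This is not any experiment's actual algorithm; the 1D tower geometry, function name, and thresholds are invented for clarity.

```python
import numpy as np

def em_clusters(tower_et, seed_thr=5.0, cluster_thr=10.0):
    # Toy sliding-window clustering over a 1D array of trigger towers.
    # A seed is a local maximum above seed_thr; the cluster ET sums the
    # seed and its two neighbours and fires if it exceeds cluster_thr.
    # (Thresholds are illustrative, in arbitrary GeV-like units.)
    et = np.asarray(tower_et, float)
    hits = []
    for i in range(1, len(et) - 1):
        # ">=" on one side, ">" on the other avoids double-counting
        # a seed shared between two equal towers
        if et[i] > seed_thr and et[i] >= et[i - 1] and et[i] > et[i + 1]:
            cluster_et = et[i - 1] + et[i] + et[i + 1]
            if cluster_et > cluster_thr:
                hits.append((i, cluster_et))
    return hits

# One 16-unit cluster fires; the 9-unit candidate fails the cluster cut.
found = em_clusters([0, 1, 12, 3, 0, 2, 6, 1, 0])
```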


System Design for HEP Triggers - Lecturer: Peter Wilson

I. What does the Trigger do?
II. Design Constraints - DAQ Bandwidth vs Physics Rates
· Types of Experiment (eg Cosmic Ray, Fixed Target, Collider)
· Beam Structures (DC, Pulse trains etc)
· Cross-section of Physics of interest vs backgrounds
· Basic methods of event selection - particle identification, kinematics, etc.
· Need to select data of greatest interest for physics analysis
· Bandwidth, Bandwidth, Bandwidth
III. Intimate relationship with DAQ system
· DAQ readout bandwidth
· data storage and transfer architecture
· data sparsification
· Impact of Trigger on DAQ design
IV. Need for and design of multi-level triggers
· Tied directly to DAQ capabilities
· Rate reduction too large to be done in single fast processing step
V. Differences for different types of experiment:
· non-accelerator experiments
· fixed target experiment
· e+e- collider
· ep collider
· Hadron collider
VI. Impact of Trigger design on DAQ and Front-End Electronics Designs
VII. Design for testing and validation of Trigger operation
· Measurement of Trigger Efficiency
· Readout of data from Trigger decision process
· Trigger Hardware as Data Source for DAQ
VIII. Historical Perspective
· Progression of HEP accelerator luminosities
· Improvements in Experiment bandwidths

Charged Particle Track Processors for Hardware Triggering - Lecturer: Levan Babukhadia

I. Relationship of tracking algorithms to detector architecture
II. Relationship of hardware implementation to front-end and DAQ designs
III. Technological progression of Track Processors
· Fixed target to colliding beam
· With electronics capabilities
IV. Specific implementation examples/differences
· e+e- (eg Babar, Belle)
· p-pbar (CDF, D0)
· Being built (LHC)
V. What may the future bring?

Triggering on Particle Types: Calorimeter and Muon Based Triggers - Lecturer: Sridhara Dasu

I. Calorimeter based triggering
· Electrons, Photons, and Jets
· Global energy triggers
· Energy clustering algorithms
II. Triggering on Muons
· Triggering on Muon detectors alone
· Connection to charged track processors (matching)
III. Relationship of hardware implementation to front-end and DAQ designs
IV. Specific implementation examples/differences
· Fixed target
· e+e- (eg Babar, Belle)
· p-pbar (CDF, D0)
· Being built (CMS)
V. Technologies: ASICs, FPGAs,...
VI. What may the future bring?

Triggering on Tracks from Detached Vertices - Lecturer: Giovanni Punzi

I. Technological progression of vertex trigger track processors
II. Relationship of tracking algorithms to detector architecture
(eg strips vs pixels, forward vs central geometry)
III. Relationship of hardware implementation to front-end and DAQ designs
IV. Hardware Implementation differences: ASICs, FPGAs, Commercial Processors
V. Specific Implementation examples/differences
· Fixed Target Experience?
· CDF and D0 Silicon Vertex Trackers
· BTeV Detached Vertex Trigger

The following will be provided as part of the course:
Class notes
Refreshments at the morning & afternoon break
Certificate of completion

Fee: $280 for IEEE Members


Integrated Circuit Front Ends for Nuclear Pulse Processing

Monday, November 11, 2002
Time: 8:30am - 5:00pm

Organizer: Chuck Britton, Oak Ridge National Laboratory

Instructors: Veljko Radeka, Brookhaven National Laboratory
Paul O'Connor, Brookhaven National Laboratory
Alan Wintenberg, Oak Ridge National Laboratory


This one-day course will cover integrated circuits developed for nuclear pulse processing applications with an emphasis on charge measurement. We will discuss bipolar and MOS transistor operation, signal processing for pulse measurements, charge-sensitive preamplifiers, photomultiplier preamplifiers, pulse-shaping circuits, sample/holds, and analog/digital converters.
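As a small numerical aside on the pulse-shaping topic (a sketch, not course material), the classic CR-RC shaper responds to a step input with a pulse that peaks at the shaping time constant; choosing that constant trades noise against pile-up, one of the themes of the course.

```python
import numpy as np

# Unit-amplitude CR-RC response to a step input:
#   v(t) = (t / tau) * exp(1 - t / tau)
# which peaks (at exactly 1.0) when t = tau, the shaping time constant.
tau = 1.0                             # shaping time, arbitrary units
t = np.linspace(0.0, 10.0, 100001)    # fine time grid
v = (t / tau) * np.exp(1.0 - t / tau)

peak_time = t[v.argmax()]             # numerically recovers t = tau
```

Longer tau lowers series-noise bandwidth but makes consecutive pulses more likely to pile up; the course discusses that trade-off in detail.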

This course is targeted at three types of attendees. The first is the engineer/physicist who wants to understand the basics of integrated circuits and pulse-shaping networks in order to begin creating circuits for systems. The second is the engineer/physicist/manager who needs to understand the basics of these technologies and their achievable performance in order to manage or work with a development team using them. The third is anyone who desires an overview for personal technical development.

The morning session will present the theory of pulse processing, covering noise sources and pile-up and their effect on resolution. Charge-sensitive preamplifiers and their design in integrated circuit processes will be covered with an emphasis on implementation.

The afternoon session will cover integrated circuits for photomultiplier tube readout and associated circuits for the system aspects such as variations of gain and timing. Analog/digital converters and their associated circuitry (sample/hold and peak stretchers) will be discussed.

In all cases, numerous examples of the present state of the art will be presented.

The following will be provided as part of the course:
Class notes
Textbook - "Analog Integrated Circuit Design", by David Johns and Ken Martin
Refreshments at the morning & afternoon break
Certificate of completion

Fee: $340 for IEEE Members


Nuclear Emission Imaging Detectors, Systems and Methods for Breast Cancer Evaluation

Monday, November 11, 2002
Time: 1:00pm - 5:30pm

Co-Organizers: Martin Tornai, PhD
Duke University Medical Center, Department of Radiology
Duke University, Department of Biomedical Engineering

Craig Levin, PhD
UC San Diego School of Medicine, Department of Radiology
San Diego VA Medical Center, Nuclear Medicine Division

Instructors: James Bowsher, PhD, Duke University Medical Center,
Department of Radiology


In recent years, the possibility of using nuclear emission imaging to alleviate some drawbacks of conventional methods for the detection, diagnosis and staging of breast cancer has been an active field of research. Dedicated nuclear cameras used in conjunction with breast cancer specific radiotracers offer the potential for more specific and sensitive identification of breast cancer than conventional imaging techniques. A drawback of standard clinical nuclear imaging methods for breast imaging, such as planar scintigraphy, Single Photon Emission Computed Tomography (SPECT), and Positron Emission Tomography (PET), is that the all-purpose camera systems' geometry and performance are not optimized for breast cancer imaging. In addition, the relatively expensive all-purpose cameras keep study costs high compared to standard breast imaging techniques, which raises questions of cost-effectiveness. For these reasons there has been great interest in the development of dedicated breast imaging systems and techniques. Through close-proximity breast imaging and new detector materials, components and configurations, such systems extend the performance limits available to nuclear imaging.

This course is designed for the scientist and engineer who wants to learn more about, or review the details of, issues specific to breast imaging with nuclear emission cameras. This discussion includes system development issues relevant to both single and coincident photon imaging systems. Issues relevant to both conventional clinical and dedicated breast imaging systems will be covered. The course begins with a discussion of basic detector design issues for breast imaging with nuclear emission cameras. Practical information on how to build a dedicated breast imaging system will be covered, such as detector components, electronics, and event positioning algorithms. Next, we present a thorough discussion of recent systems and methods that have been developed by various researchers in the field for nuclear breast imaging. A comparison of the variety of different approaches will give the course attendee perspective on the important system issues under consideration. Data generated from phantom studies will be presented to understand the limitations of the various approaches. Practical clinical imaging applications of these systems and methods will also be presented to demonstrate the utility of these systems. The session ends with a comprehensive discussion of complete-orbit and image reconstruction issues relevant to nuclear emission breast imaging systems. The particular geometry of the breast in relation to the detector systems yields unique problems and solutions for tomographic image reconstruction.
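As a minimal illustration of the event-positioning topic mentioned above, classic Anger (centroid) logic estimates the scintillation position as the signal-weighted mean of the photodetector positions. This is a sketch only; real cameras apply linearity corrections and more elaborate statistical estimators, and the numbers here are invented.

```python
import numpy as np

def anger_position(pmt_signals, pmt_x, pmt_y):
    # Anger logic: event position = signal-weighted centroid of the
    # photodetector (e.g. PMT) positions.
    w = np.asarray(pmt_signals, float)
    total = w.sum()
    x = float(w @ np.asarray(pmt_x, float) / total)
    y = float(w @ np.asarray(pmt_y, float) / total)
    return x, y

# Two PMTs at x = 0 and x = 4; the brighter one pulls the estimate
# toward itself.
x, y = anger_position([1.0, 3.0], [0.0, 4.0], [0.0, 0.0])
```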


Detector Design Issues for Breast Imaging with Nuclear Emission Cameras
(Craig Levin)

I. Overview of Breast Imaging with Nuclear Cameras
A. Motivation
B. General Design Principles

II. Scintillation Detector Designs of Nuclear Imagers
A. Scintillation Crystal Design
B. Collimation Schemes for Scintillation Crystal Designs
C. Photodetector Design
D. Electronic Readout of Position Sensitive Photodetectors
E. Electronic Processing and Data Acquisition for Imaging
F. Event Positioning Schemes and Image Formation
III. Semiconductor Detector Designs of Nuclear Imagers
A. Semiconductor Crystal Materials
B. Semiconductor vs. Scintillation Crystal Designs
C. Semiconductor Imaging Array Configurations
IV. Promising Detector Designs for Dedicated Nuclear Breast Imagers
A. Small FOV Gamma Ray Imagers
B. Small FOV Coincidence Imagers

Systems Approaches to Nuclear Emission Breast Imaging
(Martin P. Tornai)

I. Overview - Imaging Approaches
A. Primary Concerns of Nuclear Imaging Systems
B. Single Photon Planar Imaging
C. Single Photon Emission Computed Tomographic Imaging
D. Coincident Photon Planar Imaging
E. Coincident Photon Tomographic Imaging
F. Useful "Add-On" Features of Systems
II. Single Photon Planar Imaging (GEM, PhEM™, Scintimammography, SPEM)
A. Overview of Uncompressed and Compressed Breast Imaging
B. Clinical Scintimammography Cameras & Systems
C. Dedicated, Compact Cameras & Systems
D. Needle Biopsy Guidance with Compact Systems
III. Mammotomography with Single Photon Emission Computed Tomography (ASETT, PICO-SPECT, RSH-SPECT, SPECT)
A. Overview of Uncompressed Breast Imaging
B. Clinical SPECT Cameras & Systems
C. Compact SPECT Cameras & Systems
D. Specialized SPECT Systems
IV. Coincident Photon Planar Imaging (PEM)
A. Overview of Uncompressed and Partially-Compressed Breast Imaging
B. Dedicated, Compact Cameras & Systems
C. Needle Biopsy Guidance with Compact Systems
V. Mammotomography with Coincident Photon Tomographic Imaging (B-PET, mammoPET, maxPET, PET)
A. Overview of Uncompressed and Partially-Compressed Breast Imaging
B. Clinical PET Cameras & Systems
C. Dedicated, Compact PET Cameras & Systems

Image Reconstruction and More Nearly Complete Orbits in Mammotomography
(James E. Bowsher)

I. General Perspective on Image Reconstruction and Orbits for Mammotomography
II. Orbits for Nearly Completely Sampling the Breast Region with SPECT
A. Parallel-Hole Collimation
1. Basic resolution-sensitivity characteristics of parallel-hole collimation
2. Orlov's condition
3. Parallel-hole, horizontal axis of rotation orbits
4. Parallel-hole, vertical axis of rotation orbits
5. Slanted parallel-hole collimation
B. Pinhole Collimation
1. Basic resolution-sensitivity characteristics of pinhole collimation
2. Tuy-Smith conditions
3. Partial-circle, 2D pinhole orbits
4. 3D pinhole orbits
5. Combined pinhole/parallel-hole configurations
C. Cone Beam Collimation
1. Basic resolution-sensitivity characteristics of cone-beam collimation
2. Cone beam orbits
D. Connections to PET Mammotomography
III. Reconstruction Issues for Mammotomography
A. The value of modeling scatter, attenuation, and detector geometry within iterative reconstruction for SPECT and PET
B. Iterative, statistical reconstruction of multi-modality acquisitions using highly informative models of cross-modality a priori information

The following will be provided as part of the course:
Class notes
Refreshments at the afternoon break
Certificate of completion

Fee: $175 for IEEE Members


Multi-Modality Imaging Devices

Tuesday, November 12, 2002
Time: 8:00am - 12:30pm.

Organizer: David W Townsend, Ph.D.
Department of Radiology
University of Pittsburgh


Instructors: Bruce Hasegawa, Ph.D.
Professor, Physics Research Laboratory
University of California at San Francisco

Simon R Cherry, Ph.D.
Professor, Department of Biomedical Engineering
University of California at Davis


The importance of aligning image sets from two different modalities in regions of the body other than the brain has long been recognized, particularly where the modalities represent complementary aspects of disease. Functional imaging modalities such as PET and SPECT offer little anatomical localization, whereas anatomical imaging modalities such as CT or MR generally contain very little functional information. However, the imaging of function, accurately localized within an anatomical framework, could offer a powerful approach to the diagnosis and staging of disease, and the monitoring of treatment. Despite increasing sophistication, software fusion techniques cannot compete outside the brain with the convenience and accuracy of a hardware approach where the imaging technologies themselves are fused, rather than the images registered post hoc. This course will review the motivation for combined functional and anatomical imaging, particularly emphasizing the areas in which the software approach can be problematic. The recent development of combined SPECT/CT and PET/CT designs for imaging patients, and SPECT/CT, PET/CT and PET/MR designs for imaging small animals, will be presented, summarizing the unique challenges created by the differing scale of the animal and human instrumentation. The use of the anatomical µ-map to correct the functional data for photon attenuation is also a key feature of these devices. Finally, the clinical impact of the new systems will be assessed and illustrated with patient studies in oncology and cardiology.
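To illustrate how the anatomical µ-map enters the attenuation correction (a simplified sketch: it assumes the µ values have already been scaled to the emission photon energy, a nontrivial step in real PET/CT systems, and the function name and numbers are invented):

```python
import numpy as np

def attenuation_correction_factor(mu_values, dl_cm):
    # Emission counts along a PET line of response are attenuated by
    # exp(-integral of mu dl) over the full path through the body.
    # The CT-derived mu-map supplies that line integral, so the
    # correction multiplies the measured counts by exp(+integral).
    line_integral = np.sum(mu_values) * dl_cm
    return float(np.exp(line_integral))

# Ten voxels of ~water-like tissue (mu ~ 0.096 /cm at 511 keV),
# 0.5 cm per voxel: counts are boosted by exp(0.48) ~ 1.6.
acf = attenuation_correction_factor([0.096] * 10, 0.5)
```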

The following will be provided as part of the course:
Class notes
Refreshments at the morning break
Certificate of completion

Fee: $175 for IEEE Members


Analytical Reconstruction Methods

Tuesday, November 12, 2002
Time: 8:00 a.m.-12:30 p.m.

Organizer: Michel Defrise, Department of Nuclear Medicine
Vrije Universiteit Brussel, Brussels, Belgium.

Instructors: Pierre Grangeat, LETI, CEA-DTA, Grenoble, France.
Frédéric Noo, Department of Radiology, University of Utah.


Analytic reconstruction methods describe the unknown image and the data as continuous functions, and model the data acquisition by a transform operator mapping the image onto the data. In SPECT, PET and CT, this operator is the Radon or x-ray transform in two and three dimensions. Explicit inversion formulae for these operators are discretized to obtain algorithms which take into account the sampling of the data and of the image. Beside providing a unique insight into issues such as sampling, stability and data sufficiency, analytic algorithms are the methods of choice whenever the data set and the image matrix are too large to apply iterative reconstruction techniques. Despite an already long history, the academic and industrial research on analytic methods is still extremely active and has recently produced remarkable solutions to problems which had been open for many years.

The course will provide an overview of the reconstruction methods which are currently used in clinical scanners, as well as of the most recent advances in this field. It will be assumed that the attendees have some prior knowledge of the basic principles of 2D tomography (notes will be provided beforehand), but these principles will nevertheless be carefully summarized. The course will then concentrate on more advanced topics, especially 3D reconstruction in PET and spiral CT, and dynamic (4D) reconstruction.


I.  Image reconstruction from parallel-projections (M. Defrise)

2D and 3D x-ray transform, relevance for PET and SPECT, Fourier properties (central section theorem and direct Fourier reconstruction), data sufficiency condition (Orlov), Filtered-backprojection (FBP), truncated 3D data and the Reprojection algorithm.
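A minimal numerical sketch of 2D filtered backprojection, the workhorse mentioned above (illustrative only: a plain ramp filter with no apodization, and backprojection by linear interpolation; all names are invented for the example):

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    # Parallel-beam FBP: ramp-filter each projection in the Fourier
    # domain, then smear it back across the image grid and sum views.
    n_views, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))          # |f| ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                   axis=1))

    size = n_det
    centre = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    recon = np.zeros((size, size))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate sampled by each pixel at this view angle
        t = (x - centre) * np.cos(theta) + (y - centre) * np.sin(theta) + centre
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(size, size)
    return recon * np.pi / len(angles_deg)

# Sanity check: the sinogram of a point source at the centre is a
# delta in every view; FBP should put the peak back at the centre.
angles = np.arange(0.0, 180.0, 2.0)
sino = np.zeros((len(angles), 65))
sino[:, 32] = 1.0
img = fbp_reconstruct(sino, angles)
```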

Rebinning algorithms for 3D PET. Single-slice rebinning, approximate and exact Fourier rebinning, hybrid algorithms and the statistical distribution of rebinned data.

II.  Image reconstruction from divergent projections (F. Noo)

Fan-beam FBP, Grangeat's theorem, data sufficiency (Tuy), the RADON algorithm, 3D cone-beam FBP for non-truncated cone-beam projections, redundancy and the reduction to the ramp filter, Circular acquisition (approximate Feldkamp's methods).

Exact methods for helical cone-beam CT: PI-lines, Tam's window, Grangeat's formula for truncated projections, long-object problem. Cone-beam FBP algorithm of Katsevich and implementation details.

Rebinning methods for helical cone-beam CT: advanced single-slice rebinning and related methods (AMPR, ...).

III.  Reconstruction of dynamic tomographic data (P. Grangeat)

Temporal filtering of image sequences, sliding window principle, affine motion compensation, gated tomography, reconstruction of periodic motion, voxel specific motion compensation.

Applications to CT fluoroscopy, cardiac tomography and image reconstruction for radiotherapy.

The following will be provided as part of the course:
Class notes
Refreshments at the morning break
Certificate of completion

Fee: $175 for IEEE Members


Task Based Assessment of Image Quality

Tuesday, November 12, 2002
Time: 1:00 p.m.-5:30 p.m.

Organizer: Michael King, University of Massachusetts Medical School

Instructors: Charles Metz, University of Chicago
Harrison Barrett, University of Arizona


Medical images are acquired for the purposes of diagnosis, delineation of disease state, and monitoring of therapy. Thus the relative merits of different imaging, acquisition, reconstruction, and processing strategies are best determined from objective comparisons of how well the imaging systems, protocols, and images perform tasks closely related to the clinical ones for which imaging is intended. The purpose of this short course is to introduce participants to the objective assessment of image quality and to give them the information needed to start conducting studies of task performance using human and numerical observers.

This course will be divided into two sessions, with a combined discussion and question-and-answer session following the second.

The first session will cover the conduct of lesion detection studies using human observers. It will start with the underlying model of ROC studies. This will be followed by discussion of the design, conduct, and analysis of ROC studies. Specific topics will include the collection of data, the definition of "truth", avoidance of bias in study design, curve fitting, comparison criteria, and statistical testing. The session will conclude with discussion of alternative observer testing methodologies such as LROC, FROC and alternative forced choice.
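A numerical aside (not course material): the area under the empirical ROC curve equals the Wilcoxon-Mann-Whitney statistic, so it can be computed directly from observer scores without any curve fitting. The function name and scores below are invented for the illustration.

```python
import numpy as np

def empirical_auc(signal_scores, noise_scores):
    # Nonparametric estimate of the area under the ROC curve: the
    # probability that a randomly chosen signal-present score exceeds
    # a randomly chosen signal-absent score, with ties counted 1/2
    # (the Wilcoxon-Mann-Whitney statistic).
    s = np.asarray(signal_scores, float)[:, None]
    n = np.asarray(noise_scores, float)[None, :]
    return float(np.mean(s > n) + 0.5 * np.mean(s == n))

# Perfectly separated scores give AUC = 1.0 (ideal detection);
# identical score distributions give AUC = 0.5 (pure guessing).
perfect = empirical_auc([2, 3, 4], [0, 1])
guess = empirical_auc([1, 2], [1, 2])
```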

The second session will deal with classification and estimation tasks as assessed by numerical observers. It will start with a review of statistical decision theory and the statistical properties of medical images. With this background, optimum strategies for performing the tasks (ideal observers) will be formulated, and computational difficulties in actually implementing the optimal strategy will be identified. Various suboptimal strategies will be presented, including models that incorporate limitations of the human visual system. Some examples of applications will be presented.

The following will be provided as part of the course:
Class notes
Refreshments at the afternoon break
Certificate of completion

Fee: $175 for IEEE Members


Statistical Methods for Image Reconstruction

Tuesday, November 12, 2002
Time: 1:00 p.m.-5:30 p.m.

Organizer: Jeffrey A. Fessler, Associate Professor
Department of Electrical Engineering and Computer Science
Department of Biomedical Engineering
Nuclear Medicine Division of Department of Radiology
University of Michigan


The recent commercial introduction of iterative algorithms for tomographic image reconstruction, and the increasing interest in scanners with nonstandard imaging geometries, have brought new relevance and timeliness to the topic of statistical methods for image reconstruction. This course will provide an orderly overview of the potpourri of statistical reconstruction methods that have been proposed recently. Rather than advocating any particular method, the course will emphasize the fundamental issues that one must consider when choosing between different reconstruction approaches. The intended audience is anyone who would like to reconstruct "better" images from photon-limited measurements and who wants to make informed choices between the various methods. Recent advances in convergent forms of "ordered subsets" algorithms will be given particular attention, since these algorithms can be practical for routine use while also having desirable theoretical properties. Both emission tomography and transmission tomography algorithms will be discussed.
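As a concrete taste of the topic, the classic ML-EM update for emission tomography is x ← x · (Aᵀ(y/Ax)) / Aᵀ1, which maximizes the Poisson likelihood while keeping the image nonnegative. The sketch below uses an invented 3-bin, 2-voxel geometry with noise-free data; it is a toy, not any scanner's model.

```python
import numpy as np

# Toy system matrix: A[i, j] = P(count in detector bin i | emission in voxel j)
A = np.array([[0.8, 0.2],
              [0.5, 0.5],
              [0.1, 0.9]])
x_true = np.array([40.0, 20.0])   # true voxel activities (made up)
y = A @ x_true                    # noise-free expected counts

x = np.ones(2)                    # uniform, strictly positive start
sens = A.sum(axis=0)              # sensitivity image A^T 1
for _ in range(500):
    # multiplicative ML-EM update; each step raises the Poisson likelihood
    x *= (A.T @ (y / (A @ x))) / sens
```

With consistent (noise-free) data the iterates converge to the true activities; with real Poisson data the same update converges to the maximum-likelihood image, and ordered-subsets variants apply it to subsets of the bins for acceleration.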

Background of Participants:
Attendees should be familiar with photon-counting imaging systems at the level presented in the Medical Imaging short course offered in previous years. Some past attendees have commented that at least a little experience with some type of iterative reconstruction (e.g. ART or OS-EM) would be helpful for getting the most value from this course.



All attendees who register in advance for this course (and who supply a valid email address with their registration information) will be emailed a link and a password for downloading the annotated lecture notes for this short course about 2 weeks before the meeting. These advance registrants can then choose whether to print the notes in advance (about 75 4-up pages) so as to have a hard copy for taking notes during the course, or to download the PDF file onto a laptop. On-site course registrants will be given the access information during the course and may download and print the course notes after the meeting. Hard copies of the notes will NOT be available at the course, so advance registration is recommended.



A. Introduction
The Poisson statistical model
Mathematical statement of the reconstruction problem
B. The Statistical Framework
Image parameterization
System physical modeling
line / strip integrals
detector response etc.
projector/backprojector cautions
Statistical modeling of measurements
Gaussian (data-weighted least squares)
Reweighted least squares
Deviations, e.g. deadtime
Shifted Poisson (precorrected random coincidences)
Emission vs Transmission scans
Objective functions
Contrast with "algebraic" methods
Bayesian estimation: Maximum a posteriori (MAP) methods
Data-fit terms
nonconvex, entropy, ...
Object constraints
C. Iterative algorithms for statistical image reconstruction
EM based
Direct optimization
(Coordinate Descent, Conjugate Gradient, Surrogate Functions)
simultaneous vs sequential
convergence rate
global convergence
Optimization transfer / surrogate functions
D. Additional topics
Ordered subsets / block iterative algorithms
acceleration properties interpreted geometrically
convergence issues
Spatial resolution properties / modified penalty functions
Noise properties
Performance in detection tasks relative to FBP
Applications to real PET and SPECT data
(and associated practical issues)
Model mismatch
Precorrected data
Comparisons to FBP
Pseudo-3D PET reconstruction from Fourier rebinned data

Biographical Sketch:
Jeff Fessler earned a Ph.D. in electrical engineering in 1990 from Stanford University. He has since worked at the University of Michigan, first as a DoE Alexander Hollaender post-doctoral fellow and then as an Assistant Professor in the Division of Nuclear Medicine. Since 1995 he has been with the EECS Department, where he is an Associate Professor.

The following will be provided as part of the course:
Class notes & bibliography
Refreshments at the afternoon break
Certificate of completion

Fee: $175 for IEEE Members
