Plenary Speakers

Evolution of The GPU Device Widely Used in AI and Massive Parallel Processing

Toru Baji
NVIDIA (Japan)

Abstract: While CPU performance can no longer benefit from Moore's law, the GPU (Graphics Processing Unit) continues to increase its performance by roughly 1.5x per year. For this reason, GPUs are now widely used not only for computer graphics but also for massively parallel processing and AI (Artificial Intelligence). In this paper, the details of this continuous performance growth, the constant evolution in transistor count and die size, and the scalable GPU architecture will be described.
(Keywords: GPU, Die Size, Transistor Count, Moore's Law, AI, GPU Computing, SoC)

Biography: He received his M.S. degree from Osaka University in 1977 and joined Hitachi Central Research Laboratory, where he conducted research and development on solid-state image sensors and processor architectures. He also carried out research on analog-digital CMOS circuits and DSP architectures at the University of California, Berkeley, and at Hitachi America R&D. He moved to the Semiconductor Division of Hitachi Ltd. in 1993 and then to Renesas, where he served as department manager of the SH-DSP department and later the automotive application department.
In 2008 he joined NVIDIA as a Senior Solution Architect for the automotive and Tegra SoC business. Since 2016 he has been a technical adviser and GPU evangelist at NVIDIA.