Extending the capabilities of Tiramisu
Author(s)
Ben Romdhane, Malek
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Saman P. Amarasinghe.
Abstract
High-performance computing requires not only writing highly efficient code, but also targeting multiple architectures (e.g. CPU, GPU, MPI). Bundling the algorithm with its optimizations often obfuscates the code, and different architectures require different optimizations and programming tools. Tiramisu [3], an optimization framework, addresses this issue by separating the algorithm, its optimizations, and architecture details, and by targeting multiple architectures with a unified syntax. In this work, we present a Julia interpreter that compiles a subset of the language to Tiramisu. We show that by adding simple Tiramisu optimization commands to Julia code, we can achieve up to a 14x speedup. We also present a CUDA backend for Tiramisu that targets GPUs. We showcase a flexible Tiramisu CUDA API and show how common GPU usage patterns can be expressed in Tiramisu. We demonstrate that Tiramisu matches or outperforms the Halide GPU backend.
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 69-71).
Date issued
2018
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.