Efficient algorithms and hardware for Natural Language Processing
Author(s)
Wang, Hanrui, S.M. Massachusetts Institute of Technology.
Download: 1192966271-MIT.pdf (5.064 MB)
Alternative title
Efficient algorithms and hardware for NLP
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Song Han.
Abstract
Natural Language Processing (NLP) is essential for many real-world applications, such as machine translation and chatbots. Recently, NLP has seen rapid progress driven by Transformer models with the attention mechanism. Despite their high performance, Transformers are challenging to deploy because of their intensive computation. In this thesis, we present an algorithm-hardware co-design approach to enable efficient Transformer inference. On the algorithm side, we propose the Hardware-Aware Transformer (HAT) framework, which leverages Neural Architecture Search (NAS) to find a specialized low-latency Transformer model for each hardware platform. We construct a large design space with novel arbitrary encoder-decoder attention and heterogeneous layers. A SuperTransformer that covers all candidates in the design space is then trained and efficiently produces many SubTransformers through weight sharing. We perform an evolutionary search under a hardware latency constraint to find a SubTransformer model for the target hardware. On the hardware side, since general-purpose platforms are inefficient at executing attention layers, we further design an accelerator named SpAtten for efficient attention inference. SpAtten introduces a novel token pruning technique to reduce the total memory access and computation. The pruned tokens are selected on-the-fly based on their importance to the sentence, making the technique fundamentally different from weight pruning; we therefore design a high-parallelism top-k engine to perform the token selection efficiently. SpAtten also supports dynamic low precision, allowing different bitwidths across layers according to the attention probability distribution. Measured on a Raspberry Pi, HAT achieves a 3X speedup and a 3.7X smaller model size with 12,041X lower search cost than baselines. For attention layer inference, SpAtten reduces DRAM access by 10.4X and achieves 193X and 6218X speedups, and 702X and 1244X energy savings, over a TITAN Xp GPU and a Raspberry Pi ARM CPU, respectively.
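The abstract only sketches SpAtten's attention-based token pruning at a high level. As a rough illustration of the idea, the Python snippet below scores each token by the cumulative attention probability it receives and keeps the top-k tokens; the function name, array shapes, and keep_ratio parameter are hypothetical stand-ins for illustration, not definitions taken from the thesis.

import numpy as np

def prune_tokens_by_attention(attn_probs, keep_ratio=0.5):
    # attn_probs: attention probabilities with shape (heads, queries, keys),
    # i.e. the softmax output of one attention layer.
    # Score each key token by the attention it receives, accumulated
    # over heads and query positions (illustrative importance metric).
    importance = attn_probs.sum(axis=(0, 1))            # shape: (keys,)
    k = max(1, int(keep_ratio * importance.shape[0]))   # number of tokens to keep
    top_k = np.argpartition(importance, -k)[-k:]        # unordered top-k indices
    return np.sort(top_k)                               # preserve original token order

# Example: 4 heads, 6 queries, 6 key tokens -> keep the 3 most-attended tokens
probs = np.random.dirichlet(np.ones(6), size=(4, 6))
print(prune_tokens_by_attention(probs, keep_ratio=0.5))

In the accelerator, this selection is performed on-the-fly by a high-parallelism top-k engine rather than a full sort, which is what makes the pruning cheap enough to translate into memory-access and computation savings.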
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020. Cataloged from the official PDF of the thesis. Includes bibliographical references (pages 71-81).
Date issued
2020
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.