Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers

J Xie, P Cheng, X Liang, Y Dai, N Du - arXiv preprint arXiv:2308.13191, 2023 - arxiv.org
Although dominant in natural language processing, transformer-based models remain challenged by long-sequence processing, because the computational cost of the self-attention operation grows quadratically with the input sequence length. To alleviate this complexity, we propose a simple framework that enables off-the-shelf pre-trained transformers to process much longer sequences while the computation and memory costs grow only linearly with the input sequence length. More specifically, our method divides each long-sequence input into a batch of chunks, aligns inter-chunk information during the encoding steps, and finally selects the most representative hidden states from the encoder for the decoding process. To extract inter-chunk semantic information, we align the start- and end-token embeddings among chunks in each encoder transformer block. To learn an effective hidden-state selection policy, we design a dual updating scheme inspired by reinforcement learning, which regards the transformer decoder as the environment and the downstream performance metric as the reward for evaluating hidden-selection actions. Our empirical results on real-world long-text summarization and reading comprehension tasks demonstrate effective improvements over prior long-sequence processing baselines.
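The abstract outlines a three-step pipeline: chunk the input, align inter-chunk information at the chunk boundaries inside each encoder block, and select a subset of hidden states for the decoder. The sketch below illustrates that pipeline in PyTorch; the mean-based alignment of start/end tokens, the linear scorer, and all module and parameter names are illustrative assumptions, not the paper's exact implementation (which trains the selector with an RL-style dual updating scheme, using the decoder as the environment and a downstream metric as the reward).

```python
# Minimal sketch of the chunk-align-select idea described in the abstract.
# The alignment rule (cross-chunk mean of boundary tokens) and the top-k
# scorer are assumptions for illustration only.
import torch
import torch.nn as nn


def chunk_sequence(input_ids: torch.Tensor, chunk_len: int, pad_id: int) -> torch.Tensor:
    """Split a long token sequence of shape (seq_len,) into chunks of shape (num_chunks, chunk_len)."""
    seq_len = input_ids.size(0)
    pad = (-seq_len) % chunk_len
    if pad:
        input_ids = torch.cat([input_ids, input_ids.new_full((pad,), pad_id)])
    return input_ids.view(-1, chunk_len)


class AlignedEncoderBlock(nn.Module):
    """Encoder block that encodes each chunk independently, then shares
    inter-chunk information by aligning the start- and end-token states
    (here: replacing them with their cross-chunk mean)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (num_chunks, chunk_len, d_model); chunks attend only within themselves.
        hidden = self.block(hidden)
        # Align the boundary tokens across chunks.
        start = hidden[:, :1, :].mean(dim=0, keepdim=True).expand(hidden.size(0), -1, -1)
        end = hidden[:, -1:, :].mean(dim=0, keepdim=True).expand(hidden.size(0), -1, -1)
        return torch.cat([start, hidden[:, 1:-1, :], end], dim=1)


class HiddenSelector(nn.Module):
    """Scores every encoded position and keeps the top-k states for the decoder.
    In the paper this policy is learned with an RL-inspired dual updating scheme;
    only the selection step itself is shown here."""

    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)
        self.k = k

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        flat = hidden.reshape(-1, hidden.size(-1))        # (num_chunks * chunk_len, d_model)
        scores = self.scorer(flat).squeeze(-1)            # one score per position
        top = scores.topk(min(self.k, flat.size(0))).indices
        return flat[top]                                   # (k, d_model) states passed to the decoder


# Usage sketch: chunk, embed, align, then select a fixed budget of states.
ids = torch.randint(0, 30000, (4096,))
chunks = chunk_sequence(ids, chunk_len=512, pad_id=0)      # (8, 512)
hidden = nn.Embedding(30000, 256)(chunks)                  # (8, 512, 256)
hidden = AlignedEncoderBlock(d_model=256, n_heads=8)(hidden)
selected = HiddenSelector(d_model=256, k=1024)(hidden)     # (1024, 256)
```

Because each chunk attends only within itself and cross-chunk communication is limited to the boundary tokens, the cost of the encoding step scales with the number of chunks, i.e. linearly in the input length, while the selection step caps the amount of context handed to the decoder.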