The Subword-Character Multi-Scale Transformer With Learnable Positional Encoding for Machine Translation

Abstract: The transformer model addresses the efficiency bottleneck caused by sequential computation in traditional recurrent neural networks (RNNs) by using the self-attention mechanism to capture global dependencies in parallel. The subword-level modeling units and fixed-pattern position...
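The abstract contrasts learnable positional encoding with the fixed sinusoidal pattern of the original transformer. As a rough illustration only (the paper's actual architecture is not reproduced here), the sketch below shows the standard way a learnable positional encoding is built in PyTorch: a trainable embedding table added to the token embeddings, with all names (`LearnablePositionalEncoding`, `max_len`, `d_model`) chosen for this example rather than taken from the article.

```python
import torch
import torch.nn as nn

class LearnablePositionalEncoding(nn.Module):
    """Trainable position embeddings, in contrast to fixed sinusoidal encodings.

    A minimal sketch; not the implementation from Yao & Zhou (2025).
    """
    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        # One learnable vector per position, updated by backpropagation
        # together with the rest of the model.
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        nn.init.trunc_normal_(self.pos_embed, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token embeddings; add the learned
        # position vectors for the first seq_len positions.
        return x + self.pos_embed[:, : x.size(1), :]

# Hypothetical usage: add learned positions to subword embeddings
# before they enter a transformer encoder.
tokens = torch.randn(2, 16, 512)                       # (batch, seq, dim)
pe = LearnablePositionalEncoding(max_len=128, d_model=512)
out = pe(tokens)                                       # same shape as input
```

Unlike a fixed sinusoidal table, these position vectors are free parameters, so the model can adapt them to the positional regularities of the training data; the trade-off is that positions beyond `max_len` have no encoding at inference time.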


Bibliographic Details
Published in: Engineering Reports
Main Authors: Wenjing Yao, Wei Zhou
Format: Article
Language: English
Published: Wiley, 2025-07-01
Online Access: https://doi.org/10.1002/eng2.70287