The Subword‐Character Multi‐Scale Transformer With Learnable Positional Encoding for Machine Translation
ABSTRACT The transformer model addresses the efficiency bottleneck caused by sequential computation in traditional recurrent neural networks (RNNs) by leveraging the self‐attention mechanism to capture global dependencies in parallel. The subword‐level modeling units and fixed‐pattern position...
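The title and abstract contrast fixed-pattern (e.g., sinusoidal) positional encodings with learnable ones. As a rough illustration only, not the paper's actual implementation (which is behind the linked DOI), a minimal sketch of a learnable positional encoding in PyTorch might look like the following; the class name, maximum length, and initialization choices are assumptions for the example:

```python
import torch
import torch.nn as nn


class LearnablePositionalEncoding(nn.Module):
    """Position embeddings learned jointly with the rest of the model,
    in contrast to the fixed sinusoidal pattern of the original Transformer."""

    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        # One trainable vector per position, updated by backpropagation.
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        nn.init.trunc_normal_(self.pos_embed, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); add position vectors elementwise.
        return x + self.pos_embed[:, : x.size(1), :]


# Usage sketch: add learned positions to token embeddings before attention.
pos_enc = LearnablePositionalEncoding(max_len=512, d_model=256)
tokens = torch.randn(8, 100, 256)  # (batch, seq_len, d_model)
out = pos_enc(tokens)
```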
| Published in: | Engineering Reports |
|---|---|
| Main Authors: | |
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2025-07-01 |
| Subjects: | |
| Online Access: | https://doi.org/10.1002/eng2.70287 |
