PReFormer: A memory-efficient transformer for point cloud semantic segmentation

The success of transformer networks in the natural language processing and 2D vision domains has encouraged the adaptation of transformers to 3D computer vision tasks. However, most of the existing approaches employ standard backpropagation (SBP). SBP requires the storage of model activations on a f...

Bibliographic Details
Published in: International Journal of Applied Earth Observations and Geoinformation
Main Authors: Perpetual Hope Akwensi, Ruisheng Wang, Bo Guo
Format: Article
Language: English
Published: Elsevier 2024-04-01
Online Access: http://www.sciencedirect.com/science/article/pii/S1569843224000840