Localized dimension growth in random network coding: A convolutional approach

We propose an efficient Adaptive Random Convolutional Network Coding (ARCNC) algorithm to address the issue of field size in random network coding. ARCNC operates as a convolutional code, with the coefficients of local encoding kernels chosen randomly over a small finite field. The lengths of local encoding kernels increase with time until the global encoding kernel matrices at related sink nodes all have full rank. Instead of estimating the necessary field size a priori, ARCNC operates in a small finite field. It adapts to unknown network topologies without prior knowledge, by locally incrementing the dimensionality of the convolutional code. Because convolutional codes of different constraint lengths can coexist in different portions of the network, reductions in decoding delay and memory overheads can be achieved with ARCNC. We show through analysis that this method performs no worse than random linear network codes in general networks, and can provide significant gains in terms of average decoding delay in combination networks.

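The abstract describes ARCNC's core adaptive step: local encoding kernels are polynomials in a delay variable D with coefficients drawn at random from a small finite field, and each kernel grows by one coefficient per adaptation round until the sink's global encoding kernel matrix reaches full rank. The Python sketch below is only a rough illustration of that loop, not the authors' implementation: it assumes a toy two-source, single-sink setting over GF(2), randomizes the sink's 2x2 global kernel matrix directly instead of propagating local kernels through a network topology, and all names (poly_mul_gf2, arcnc_toy, etc.) are hypothetical.

import random

def poly_mul_gf2(a, b):
    # Multiply two GF(2) polynomials given as coefficient lists (index = power of D).
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def poly_add_gf2(a, b):
    # Add (XOR) two GF(2) polynomials of possibly different lengths.
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0) for i in range(n)]

def full_rank_2x2(m):
    # A square polynomial matrix has full rank over GF(2)(D) iff its determinant,
    # here m00*m11 + m01*m10 computed in GF(2)[D], is a nonzero polynomial.
    det = poly_add_gf2(poly_mul_gf2(m[0][0], m[1][1]),
                       poly_mul_gf2(m[0][1], m[1][0]))
    return any(det)

def arcnc_toy(max_rounds=10, seed=1):
    # Toy adaptation loop: start with constraint length 1 (random GF(2) constants),
    # then append one random coefficient of D per round until the sink's 2x2
    # global encoding kernel matrix has full rank, i.e., the sink can decode.
    rng = random.Random(seed)
    kernel = [[[rng.randint(0, 1)] for _ in range(2)] for _ in range(2)]
    for round_idx in range(max_rounds):
        if full_rank_2x2(kernel):
            return round_idx, kernel
        for row in kernel:
            for entry in row:
                entry.append(rng.randint(0, 1))
    return None, kernel

if __name__ == "__main__":
    rounds_needed, kernel = arcnc_toy()
    print("adaptation rounds before full rank:", rounds_needed)
    print("global kernel matrix (GF(2) coefficient lists):", kernel)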

Bibliographic Details
Main Authors: Guo, Wangmei (Author), Cai, Ning (Author), Shi, Xiaomeng (Contributor), Medard, Muriel (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor), Massachusetts Institute of Technology. Research Laboratory of Electronics (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2012-10-09T15:14:43Z.
Subjects:
Online Access: Get fulltext
LEADER 02424 am a22002773u 4500
001 73679
042 |a dc 
100 1 0 |a Guo, Wangmei  |e author 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Research Laboratory of Electronics  |e contributor 
100 1 0 |a Shi, Xiaomeng  |e contributor 
100 1 0 |a Medard, Muriel  |e contributor 
700 1 0 |a Cai, Ning  |e author 
700 1 0 |a Shi, Xiaomeng  |e author 
700 1 0 |a Medard, Muriel  |e author 
245 0 0 |a Localized dimension growth in random network coding: A convolutional approach 
260 |b Institute of Electrical and Electronics Engineers (IEEE),   |c 2012-10-09T15:14:43Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/73679 
520 |a We propose an efficient Adaptive Random Convolutional Network Coding (ARCNC) algorithm to address the issue of field size in random network coding. ARCNC operates as a convolutional code, with the coefficients of local encoding kernels chosen randomly over a small finite field. The lengths of local encoding kernels increase with time until the global encoding kernel matrices at related sink nodes all have full rank. Instead of estimating the necessary field size a priori, ARCNC operates in a small finite field. It adapts to unknown network topologies without prior knowledge, by locally incrementing the dimensionality of the convolutional code. Because convolutional codes of different constraint lengths can coexist in different portions of the network, reductions in decoding delay and memory overheads can be achieved with ARCNC. We show through analysis that this method performs no worse than random linear network codes in general networks, and can provide significant gains in terms of average decoding delay in combination networks. 
520 |a National Natural Science Foundation (China) (China Scholarship Council) (Grant 60832001) 
520 |a Georgia Institute of Technology (Subcontract RA306-S1) 
520 |a Massachusetts Institute of Technology. Research Laboratory of Electronics (Claude E. Shannon Research Assistantship) 
520 |a Natural Sciences and Engineering Research Council of Canada (NSERC) (Postgraduate Scholarship) 
546 |a en_US 
655 7 |a Article 
773 |t Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2011