Summary: | Artificial intelligence (AI) remains limited to passive cognition, and its operating process is not transparent; consequently, the technology depends heavily on its learning data. Because raw data for AI learning are processed and inspected manually to ensure the high quality required for sophisticated learning, human errors are inevitable; damaged or incomplete data, or deviations from the original data, can produce unexpected outputs when the processed data are used for AI learning. In this context, this study examines cases in which AI learning data were inaccurate, from a cybersecurity perspective, and shows the need to manage learning data before machine learning through an analysis of cybersecurity attack techniques. We then propose a data-preserving AI system, a blockchain-based learning-data environment model that verifies the integrity of learning data. The proposed model is expected to prevent the cyberattacks and data deterioration that may occur when data are provided and used over an open network during the collection and processing of raw data.
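The abstract does not specify how the blockchain-based model verifies integrity; as a minimal illustrative sketch (all function and field names here are hypothetical, not taken from the paper), a hash chain over training-data records shows the core idea: each entry is hashed together with the previous entry's hash, so any later tampering with a record invalidates the chain.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous block's hash,
    # chaining entries so altering any record breaks later hashes.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records: list) -> list:
    # Build a simple hash chain (ledger) over training-data records.
    chain, prev = [], "0" * 64  # genesis hash: all zeros
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain: list) -> bool:
    # Recompute every hash; one modified record makes verification fail.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or record_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

In an actual blockchain deployment, the chain would be replicated across distributed nodes with a consensus mechanism, so no single party could rewrite the ledger; this sketch only illustrates the tamper-evidence property that such a design provides for learning data.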