Meticulous Thought Defender: Fine-Grained Chain-of-Thought (CoT) for Detecting Prompt Injection Attacks of Large Language Models

Large language models (LLMs) have exhibited exceptional capabilities across various natural language processing tasks; however, they remain susceptible to prompt injection attacks, which pose significant security challenges. Traditional detection methods often fail to effectively identify such attac...

Detailed Description

Bibliographic Details
Published in: IEEE Access
Main Authors: Lijuan Shi, Yajing Kang, Jie Hu, Xinchi Li, Mingchuan Yang
Format: Article
Language: English
Publication Details: IEEE 2025-01-01
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11053836/