Notion AI Agents exposed to prompt injection risk: hidden text in PDFs may leak private data.

Odaily News: Notion's newly released AI Agents carry a prompt injection risk. Attackers can embed text that is invisible to the naked eye (for example, white-on-white font) in files such as PDFs; when a user submits such a file to the Agent for processing, the Agent may read the hidden prompt and execute its instructions, potentially sending sensitive information to an external address. Researchers note that these attacks often rely on social engineering techniques, such as impersonating authority, creating urgency, and offering false assurances of safety, to raise the success rate. Experts recommend heightened vigilance: avoid uploading PDFs or other files of unknown origin to the Agent; strictly limit the Agent's access to external networks and its data-export permissions; clean suspicious files of hidden content and review them manually; and require the Agent to show an explicit confirmation prompt before any external submission, to reduce the risk of sensitive data leaking.
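The advice to clean and manually review suspicious files can be partially automated. Below is a minimal, illustrative sketch (not Notion's or any vendor's tooling) that flags white-fill text in a PDF before the file is handed to an agent. It assumes the third-party pdfplumber library; the find_hidden_text helper and the WHITE_FILLS heuristic are names introduced here for illustration, and a real sanitizer would also need to handle tiny fonts, text placed off-page or behind images, alternative color spaces, and image-based content.

```python
"""Sketch: flag potentially hidden (white-fill) text in a PDF before
handing the file to an AI agent. Illustrative only; assumes pdfplumber."""
import sys

import pdfplumber

# Fill colors that commonly render as invisible on a white background.
# pdfplumber reports non_stroking_color in the PDF's own color space, so
# both grayscale (1,) and RGB (1, 1, 1) forms are checked here; CMYK and
# other spaces are deliberately left out of this simplified heuristic.
WHITE_FILLS = {(1,), (1, 1, 1)}


def find_hidden_text(path: str) -> list[tuple[int, str]]:
    """Return (page_number, text) pairs for characters drawn with a white fill."""
    findings = []
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            hidden_chars = [
                c["text"]
                for c in page.chars
                if tuple(c.get("non_stroking_color") or ()) in WHITE_FILLS
            ]
            if hidden_chars:
                findings.append((page.page_number, "".join(hidden_chars)))
    return findings


if __name__ == "__main__":
    for page_no, text in find_hidden_text(sys.argv[1]):
        print(f"[!] Page {page_no}: possible hidden text -> {text[:120]!r}")
```

Run against a suspicious file, the script prints pages containing white-fill characters so a human can review them before upload; it is a screening aid, not a guarantee that a file contains no injected prompt.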
