dispatch.thorcollective.com
Josh Rickard, a security analyst and engineer, documented his real LLM workflows for tasks like phishing detection, SIEM alert tuning, threat hunting, and security infrastructure design using Claude, Cursor, and ChatGPT. The most useful technique he describes is role-stacking: prompting the model to adopt multiple expert perspectives simultaneously rather than a single analyst viewpoint. He also constrains outputs by specifying his actual stack (Splunk, CrowdStrike EDR, Docker, Kubernetes), which keeps suggestions grounded rather than generic. His honest admission that LLMs amplify bad thinking as readily as good thinking is worth noting. Security teams at vendors like CrowdStrike and Palo Alto Networks have been quietly integrating LLMs into their own products, but practitioner-level workflows like these remain underrepresented relative to vendor marketing.
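As a rough illustration of the two techniques (the specific roles, wording, and helper below are assumptions for the sketch, not Rickard's exact prompts), role-stacking plus stack-pinning can be as simple as assembling a system prompt that names several expert perspectives and the real toolchain:

```python
# Hypothetical sketch of a role-stacked, stack-constrained prompt.
# The role list and phrasing are illustrative, not from Rickard's article;
# the tool names match the stack he says he specifies.
ROLES = [
    "a SOC analyst triaging alerts",
    "a detection engineer who writes Splunk searches",
    "an incident responder working with CrowdStrike EDR",
]

STACK = ["Splunk", "CrowdStrike EDR", "Docker", "Kubernetes"]


def build_prompt(task: str) -> str:
    """Assemble a prompt that stacks multiple expert roles and pins
    suggestions to the actual toolchain, so answers stay grounded."""
    role_lines = "\n".join(f"- {role}" for role in ROLES)
    return (
        "Answer from all of these perspectives at once:\n"
        f"{role_lines}\n"
        "Only recommend actions possible in this environment: "
        f"{', '.join(STACK)}.\n\n"
        f"Task: {task}"
    )


print(build_prompt("Tune this noisy failed-login SIEM alert."))
```

The point of pinning the stack in the prompt is that the model can no longer hand back generic "deploy a WAF"-style advice; every suggestion has to be expressible in the tools actually in use.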
