SecureLogger
Cyber Deception
GAN-powered adversarial log generation system that synthesizes realistic server access logs to obfuscate attacker behavioral fingerprints in honeypot environments.
PyTorch · GAN · Flask · Python
What it is
SecureLogger is a cyber deception framework that trains a GAN on real URL-path behavior and injects realistic synthetic access-log entries to camouflage true traffic patterns.
Problem it solves
Raw web access logs can reveal user behavior, system usage peaks, and high-value endpoints to adversaries. SecureLogger obscures those signals by continuously mixing plausible fake entries with real traffic traces.
How it works
- Train an LSTM-based GAN on URL-path datasets so the generator learns realistic endpoint structures and sequence patterns.
- Persist trained generator weights and use them to produce novel but plausible fake request paths.
- Inject synthetic entries using manual and watcher-driven modes so deception can run as batch or continuous flood.
- Blend fake and real events at configurable noise levels so real traffic patterns become harder to interpret while the logs remain operational.
- Run in a testable local setup with a sample web app and flooder loop for fast experimentation.
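The core of the workflow above is sampling plausible request paths and emitting them as log lines. The actual project trains an LSTM-based GAN in PyTorch for this; as a dependency-free stand-in that illustrates the same idea, the sketch below learns per-depth segment frequencies from real URL paths and samples new paths from them (all function names here are illustrative, not the project's API):

```python
import random
from collections import Counter
from datetime import datetime, timezone

def build_segment_stats(real_paths):
    """Count URL path segments at each depth across real paths."""
    stats = {}
    for path in real_paths:
        for depth, seg in enumerate(path.strip("/").split("/")):
            stats.setdefault(depth, Counter())[seg] += 1
    return stats

def sample_fake_path(stats, rng=random):
    """Draw one segment per depth, weighted by observed frequency.

    A trained GAN generator would produce more coherent sequences;
    this frequency sampler is only a minimal illustration.
    """
    segs = []
    for depth in sorted(stats):
        choices, weights = zip(*stats[depth].items())
        segs.append(rng.choices(choices, weights=weights, k=1)[0])
    return "/" + "/".join(segs)

def fake_log_line(stats, ip="10.0.0.1"):
    """Format a synthetic entry in Common Log Format."""
    ts = datetime.now(timezone.utc).strftime("%d/%b/%Y:%H:%M:%S +0000")
    return f'{ip} - - [{ts}] "GET {sample_fake_path(stats)} HTTP/1.1" 200 512'

real = ["/api/users/42", "/api/orders/7", "/static/css/main.css"]
stats = build_segment_stats(real)
print(fake_log_line(stats))
```

Swapping the frequency sampler for the persisted GAN generator's output is the only change needed to turn this into the full pipeline.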
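For the watcher-driven mode, the essential mechanism is noticing growth in the target log and reacting to it. A minimal polling tail could look like the following sketch (the file layout and callback are hypothetical; the project may watch files differently):

```python
import time

def watch_log(path, on_growth, poll_interval=1.0, max_polls=None):
    """Poll a log file and invoke a callback whenever new lines appear.

    A deception loop can use the callback to append synthetic entries
    proportional to the observed real traffic, so injection keeps pace
    with genuine load.
    """
    offset = 0
    polls = 0
    while max_polls is None or polls < max_polls:
        with open(path, "a+") as f:  # a+ creates the file if missing
            f.seek(offset)
            new_lines = f.readlines()
            offset = f.tell()
        if new_lines:
            on_growth(new_lines)
        polls += 1
        time.sleep(poll_interval)
```

Manual mode is simply calling the injection step once over a static file instead of running this loop.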
Key capabilities
- GAN-based synthetic log generation trained from real traffic signatures.
- Configurable fixed-count or ratio-based noise injection strategies.
- Manual and real-time watcher modes for flexible deployment across environments.
- Log privacy controls including redaction-friendly flow for sensitive fields.
- Operationally simple scripts for rapid proof-of-concept deployment.
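The fixed-count and ratio-based strategies listed above amount to two ways of sizing the fake batch before blending it into the real stream. A minimal sketch, with illustrative names rather than the project's actual API:

```python
import random

def plan_noise(real_count, fixed=None, ratio=None):
    """Decide how many fake entries to inject alongside real ones.

    Exactly one of `fixed` (absolute count per batch) or `ratio`
    (fakes per real entry) must be given.
    """
    if (fixed is None) == (ratio is None):
        raise ValueError("specify exactly one of fixed or ratio")
    return fixed if fixed is not None else round(real_count * ratio)

def blend(real_entries, fake_entries, rng=random):
    """Shuffle fake entries in among real ones so that ordering
    alone does not reveal which lines are synthetic."""
    mixed = list(real_entries) + list(fake_entries)
    rng.shuffle(mixed)
    return mixed
```

Ratio-based injection scales deception with real load, while a fixed count keeps log volume predictable on quiet services.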
Impact and outcomes
- Raises attacker uncertainty when services are profiled through stolen or monitored logs.
- Strengthens deception posture in honeypot and defensive-lab scenarios.
- Demonstrates a practical use of adversarial ML beyond classification: defensive obfuscation.