Detecting anomalies in distributed systems through log analysis remains challenging due to the complex temporal dependencies between log events, the diverse manifestations of system states, and the intricate causal relationships across distributed components. This paper introduces TLAN (Temporal Logical Attention Network), a novel deep learning framework that integrates temporal sequence modeling with logical dependency analysis for robust anomaly detection in distributed system logs. Our approach makes three key contributions: (1) a temporal logical attention mechanism that explicitly models both time-series patterns and logical dependencies between log events across distributed components, (2) a multi-scale feature extraction module that captures system behaviors at different temporal granularities while preserving causal relationships, and (3) an adaptive threshold strategy that dynamically adjusts detection sensitivity based on system load and component interactions. Extensive experiments on a large-scale synthetic distributed system log dataset show that TLAN outperforms existing methods, improving F1-score by 9.4% and reducing false alarms by 15.3% while maintaining low latency in real-time detection. The framework is particularly effective at identifying complex anomalies that involve multiple interacting components and cascading failures. Through comprehensive empirical analysis and case studies, we validate that TLAN effectively captures both temporal patterns and logical correlations in log sequences, making it especially suitable for modern distributed architectures. Our approach also shows strong generalization across different system scales and deployment scenarios, supported by thorough ablation studies and performance evaluations.
Keywords: anomaly detection; deep learning; distributed system logs; temporal logical modeling.
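To make the two core ideas concrete, the following is a minimal illustrative sketch, not TLAN's actual implementation: self-attention over log-event embeddings (standing in for the temporal logical attention mechanism) and a mean-plus-k-sigma rule (standing in for the adaptive threshold strategy). All function names, dimensions, and the choice of reconstruction error as an anomaly score are assumptions for illustration.

```python
import numpy as np

def self_attention(queries, keys, values):
    # Scaled dot-product attention over a sequence of log-event embeddings:
    # each event attends to every other event in the window.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

def adaptive_threshold(scores, k=2.0):
    # Toy stand-in for an adaptive threshold: flag scores above
    # mean + k * std of the current window, so sensitivity tracks
    # the observed score distribution rather than a fixed cutoff.
    mu, sigma = scores.mean(), scores.std()
    return scores > mu + k * sigma

rng = np.random.default_rng(0)
events = rng.normal(size=(8, 16))        # 8 log events, 16-dim embeddings (assumed)
context = self_attention(events, events, events)
anomaly_score = np.linalg.norm(events - context, axis=1)  # toy reconstruction error
flags = adaptive_threshold(anomaly_score)
```

In a real system the attention weights would be learned and the threshold conditioned on system load, but the sketch shows the shape of the computation: per-event context vectors, a scalar anomaly score per event, and a data-dependent decision boundary.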