
Logs


In modern infrastructure, thousands of log events can be generated every minute. These logs follow specific formats, usually contain timestamps, and are written by servers to different files such as system logs, application logs, and security logs. Because logs are scattered across many servers, diagnosing a failure means logging in to each server in turn to inspect its logs, which significantly increases the complexity of troubleshooting.

Faced with such a large volume of data, you must decide which logs to send to the log management solution and which to archive. Filtering logs before sending risks missing critical information or accidentally discarding valuable data.

To improve the efficiency of fault diagnosis, maintain a comprehensive view of system status, and avoid being caught off guard in emergencies, centralized log management with centralized retrieval and correlation analysis becomes crucial.

Through powerful log collection capabilities, log data is uniformly reported to the workspace, where the collected logs can be centrally stored, audited, monitored, alerted on, analyzed, and exported, simplifying the log management process. This approach avoids the problems of filtering logs before sending and ensures that all critical information is properly processed and analyzed.

Features

  • Query and Analysis


    Automatically identify log status, quickly filter and correlate logs, and aggregate similar text to help you discover and analyze anomalies faster and accelerate fault resolution

  • Pipelines


    Split the text content of logs and convert it into structured data, including extracting timestamps, statuses, and specific fields as tags

  • Generate Metrics


    Generate new metric data from existing data in the current workspace, allowing you to design and implement new technical indicators as needed

  • Log Index


    Filter log data that matches specified conditions, archive it into different indexes, and choose a storage strategy for each log index

  • Log Blacklist


    Customize log collection filtering rules so that log data matching the conditions is no longer reported to TrueWatch, helping reduce log storage costs

  • Data Forwarding


    Save logs, traces, and RUM data to TrueWatch's object storage or forward them to external storage, for flexible management of data forwarding

  • Data Access


    By setting role-based access permissions and data masking rules, you can control access to log data at a finer granularity while properly handling sensitive information
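
The Pipelines and Log Blacklist features above can be illustrated conceptually. The sketch below is not TrueWatch's actual pipeline syntax; it is a minimal Python illustration, with an assumed log format and a hypothetical blacklist rule, of how a raw log line might be split into structured fields and then checked against filtering rules before being reported.

```python
import re
from datetime import datetime

# Assumed log format for illustration: "<timestamp> <STATUS> <message>"
LOG_PATTERN = re.compile(
    r"(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<status>[A-Z]+)\s+"
    r"(?P<message>.*)"
)

def parse_log_line(line):
    """Pipeline step: split one raw log line into structured fields."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None  # unparseable lines could be kept as raw text instead
    fields = m.groupdict()
    # Normalize the extracted timestamp into ISO 8601
    fields["time"] = datetime.strptime(
        fields["time"], "%Y-%m-%d %H:%M:%S"
    ).isoformat()
    return fields

# Hypothetical blacklist rule: drop noisy DEBUG logs before reporting
BLACKLIST_RULES = [
    lambda f: f["status"] == "DEBUG",
]

def should_report(fields):
    """Blacklist step: report only logs that match no filtering rule."""
    return not any(rule(fields) for rule in BLACKLIST_RULES)

fields = parse_log_line("2024-05-01 12:30:45 ERROR connection refused")
print(fields)           # structured fields extracted from the raw line
print(should_report(fields))  # True: ERROR logs are not blacklisted
```

In a real deployment these steps run inside the collector or workspace rather than in application code; the point is only that parsing produces tagged, queryable fields, and blacklist rules drop matching data before it incurs storage cost.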