
Log Details


Click on a log in the log list to open the details page for that log, where you can view detailed information such as the time the log was generated, the host, source, service, content, extended fields, and context.

View Full Log

When logs are reported to TrueWatch, any single log larger than 1M is split into multiple logs in 1M chunks. For example, a 2.5M log is split into three logs of 1M, 1M, and 0.5M. You can check the completeness of the split logs using the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `__truncated_id` | string | The unique identifier of the split group. Logs split from the same original log share the same `__truncated_id`, with the ID prefix `LT_xxx`. |
| `__truncated_count` | number | The total number of logs the original log was split into. |
| `__truncated_number` | number | The order of this log within the split, starting from 0, where 0 indicates the first log. |
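The splitting behavior can be sketched as follows. This is a minimal illustration, not TrueWatch's actual implementation; the chunk size and the `LT_` ID format are taken from the description above, and the `split_log` helper is hypothetical:

```python
import uuid

CHUNK_SIZE = 1 * 1024 * 1024  # assumed 1M split threshold

def split_log(message: str) -> list[dict]:
    """Split an oversized log into chunks tagged with __truncated_* fields."""
    if len(message) <= CHUNK_SIZE:
        return [{"message": message}]
    chunks = [message[i:i + CHUNK_SIZE] for i in range(0, len(message), CHUNK_SIZE)]
    truncated_id = f"LT_{uuid.uuid4().hex}"  # shared ID with LT_ prefix, per the docs
    return [
        {
            "message": chunk,
            "__truncated_id": truncated_id,       # same for every split log
            "__truncated_count": len(chunks),     # total number of split logs
            "__truncated_number": i,              # 0 marks the first split log
        }
        for i, chunk in enumerate(chunks)
    ]
```

For a 2.5M message this yields three entries whose `__truncated_number` values are 0, 1, and 2, matching the example above.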

On the log details page, if the current log is split into multiple logs, a View Full Log button will appear in the upper right corner. Clicking this button will open a new page listing all related logs in the order they were split. The page will also highlight the log selected before the jump with a color to quickly locate upstream and downstream logs.

Obsy AI Error Analysis

TrueWatch provides the ability to parse error logs with one click. It uses a large model to automatically extract key information from the logs, combined with an online search engine and an operations knowledge base, to quickly analyze possible causes of failures and provide initial solutions.

  1. Filter out all logs with the status error;
  2. Click on a single data entry to expand the details page;
  3. Click on Obsy AI Error Analysis in the upper right corner;
  4. Start the anomaly analysis.

Error Details

If the current log contains error_stack or error_message field information, the system will provide you with error details related to that log.

For more log error information, please visit Log Error Tracing.

Attribute Fields

Click an attribute field to quickly filter and view data; you can see the host, process, trace, and container data related to the log.

| Field | Description |
| --- | --- |
| Filter Field Value | Add this field to the log explorer to view all log data matching this field value. |
| Invert Filter Field Value | Add this field to the log explorer to view all log data except entries matching this field value. |
| Add to Display Column | Add this field as a column in the explorer list. |
| Copy | Copy this field to the clipboard. |
| View Related Containers | View all containers related to this host. |
| View Related Processes | View all processes related to this host. |
| View Related Traces | View all traces related to this host. |
| View Related Inspections | View all inspection data related to this host. |

Log Content

  • The log content automatically switches between JSON and text viewing modes based on the message type. If the message field does not exist in the log, the log content section will not be displayed. The log content supports expand and collapse, and is expanded by default. When collapsed, only one line of height is displayed.

  • For logs with source:bpf_net_l4_log, both JSON and packet viewing modes are automatically provided. The packet mode displays client, server, time, and other information, and supports switching between absolute and relative time display, with absolute time as the default. The configuration after switching is saved in the local browser.

JSON Search

In JSON-formatted logs, both key and value can be searched. After clicking, the explorer search bar will add the format @key:value for searching.

For multi-level JSON data, use . to represent hierarchical relationships. For example, @key1.key2:value means searching for the value corresponding to key2 under key1.
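As an illustration of the dotted-path syntax, multi-level JSON can be flattened into `@key.path:value` search tokens. The `json_search_tokens` helper below is hypothetical, not part of TrueWatch:

```python
def json_search_tokens(obj: dict, prefix: str = "") -> list[str]:
    """Flatten nested JSON into '@key.path:value' search tokens,
    using '.' to represent each level of nesting."""
    tokens = []
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested objects, extending the dotted path.
            tokens.extend(json_search_tokens(value, path))
        else:
            tokens.append(f"@{path}:{value}")
    return tokens
```

For example, `{"key1": {"key2": "value"}}` produces the token `@key1.key2:value`, which is the form the explorer search bar uses.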

For more details, please refer to JSON Search.

Extended Fields

  • In the search bar, you can enter a field name or value to quickly search for and locate it;

  • After checking the field alias option, the alias is displayed after the field name;

  • Hover over an extended field, click the dropdown icon, and you can perform the following operations on that field:

    • Filter Field Value
    • Invert Filter Field Value
    • Add to Display Column
    • Perform Dimensional Analysis: Click to jump to analysis mode > Time Series Chart
    • Copy
Note

If you choose to add a field to the display column, an icon will appear in the list for easy identification.

Context Logs

The context query function of the log service helps you trace related records before and after the occurrence of an abnormal log through a timeline, quickly locating the root cause of the problem.

  • On the log details page, you can directly view the context logs for the current data;
  • You can sort the data by clicking the "Time" column;
  • The dropdown box on the left lets you select an index to filter the corresponding data;
  • Click the button to open the context logs in a new page.
Additional Logic Explanation

The context list is loaded in pages: each time you scroll, 50 more entries are loaded from the returned data.

How is the returned data queried?

Prerequisite: check whether the log contains the log_read_lines field. If it does, follow logic (a); otherwise, follow logic (b).

a. Take the log_read_lines value of the current log and filter with log_read_lines >= {{log_read_lines.value - 30}} and log_read_lines <= {{log_read_lines.value + 30}}

DQL Example: Current log line number = 1354170

Then:

L::RE(`.*`):(`message`) { `index` = 'default' and `host` = "ip-172-31-204-89.cn-northwest-1" AND `source` = "kodo-log" AND `service` = "kodo-inner" AND `filename` = "0.log" and `log_read_lines` >= 1354140 and `log_read_lines` <= 1354200}  sorder by log_read_lines

b. Get the current log time and derive the start and end times from it.

  • Start Time: 5 minutes before the current log's time;

  • End Time: take the time of the 50th log after the current log. If that time equals the current log's time, use time + 1 microsecond as the end time; otherwise, use that time as the end time.
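Logics (a) and (b) can be sketched as follows. This is a simplified illustration based only on the description above; the function names are hypothetical, and the 30-line and 5-minute constants come from the rules stated here:

```python
from datetime import datetime, timedelta

LINE_WINDOW = 30                      # lines before/after, per logic (a)
TIME_WINDOW = timedelta(minutes=5)    # look-back window, per logic (b)

def context_filter_by_lines(line_no: int) -> str:
    """Logic (a): window on log_read_lines when the field exists."""
    return (f"log_read_lines >= {line_no - LINE_WINDOW} "
            f"and log_read_lines <= {line_no + LINE_WINDOW}")

def context_time_range(current: datetime, fiftieth: datetime) -> tuple[datetime, datetime]:
    """Logic (b): derive (start, end) from timestamps when log_read_lines is absent.

    `fiftieth` is the time of the 50th log after the current one.
    """
    start = current - TIME_WINDOW
    if fiftieth == current:
        # All 50 logs share the current timestamp: widen by 1 microsecond.
        end = current + timedelta(microseconds=1)
    else:
        end = fiftieth
    return start, end
```

With the line number 1354170 from the DQL example, `context_filter_by_lines` reproduces the bounds 1354140 and 1354200 used in that query.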

Context Log Details Page

Click to jump to the details page. You can manage the current data through the following operations:

  • Enter text in the search box to search and locate data;
  • Click the button on the side to switch from the system default auto-wrap to content-overflow mode, where each log is displayed on a single line and you can scroll horizontally as needed.

Correlation Analysis

The system supports correlation analysis of log data. In addition to error details, extended fields, and context logs, you can also get a one-stop understanding of the hosts, containers, and networks corresponding to the logs.

Built-in Pages

For built-in pages such as Host, Container, and Pod, you can perform the following operations:

(Taking the "Host" built-in page as an example)

  • Edit the current page display fields, and the system will automatically match the corresponding data based on the fields;
  • Choose to jump to the metrics view or host details page;
  • Filter the time range.
Note

Only workspace administrators can modify the display fields of built-in pages. It is recommended to configure common fields. If the page is shared by multiple explorers, the field modifications will take effect in real time.

For example: if you configure the "index" field here, logs that contain this field display it normally; however, if the trace explorer lacks this field, the corresponding value will not be displayed.

Built-in Views

In addition to the default views displayed by the system here, you can also bind user views.

  1. Enter the built-in view binding page;
  2. View the default associated fields. You can choose to keep or delete fields, and you can also add new key:value fields;
  3. Select the view;
  4. After completing the binding, you can view the bound built-in views in the host object details. You can click the jump button to go to the corresponding built-in view page.