**yc-360 script** is a simple script that captures a comprehensive set of artifacts from your application in a [pristine manner](https://docs.ycrash.io/yc-360/features/pristine-capture.html), which are highly useful for troubleshooting production problems. Here is the list of artifacts captured by the script:

| **Artifact** | **Why It’s Important** |
| --- | --- |
| **Application Log** | The primary troubleshooting artifact. It helps you identify exceptions, timeouts, performance degradations, and business logic errors that directly explain application malfunctions. |
| **GC Log** | Records garbage collection activity and helps you identify frequent collections, long pauses, and memory overuse that degrade application throughput and latency. |
| **Thread Dump** | Captures a snapshot of all JVM threads and helps you identify deadlocks, BLOCKED threads, threads stuck on external calls, or runaway CPU-consuming threads. |
| **Heap Dump** | Provides a complete memory snapshot of the JVM and helps you identify memory leaks, objects retaining excessive references, and inefficient data structures. |
| **Heap Substitute** | Offers a lightweight version of heap details when full heap dumps are restricted or too large. Helps you understand object growth and memory allocation patterns. |
| **top** | Displays system-wide CPU and memory utilization and helps you identify whether system resources are saturated. It also reveals whether other processes are consuming CPU or memory and whether CPU cycles are stolen by the kernel or hypervisor. |
| **ps** | Lists all running processes and helps you identify zombie, orphaned, or unexpected processes that consume resources or interfere with application performance. |
| **top -H** | Shows CPU usage at the thread level and helps you pinpoint which thread within the JVM or another process is consuming excessive CPU time. |
| **Disk Usage (df -h)** | Displays available and used disk space and helps you identify low-disk or full-disk conditions that can lead to logging failures, crashes, or corrupted writes. |
| **dmesg** | Shows kernel-level system messages and helps you identify hardware issues, driver failures, Out-of-Memory (OOM) kills, and kernel panics affecting system stability. |
| **netstat** | Lists active network connections, open ports, and listening sockets and helps you identify stuck connections, port conflicts, or network floods. |
| **ping** | Measures network reachability and latency to remote endpoints and helps you identify network disconnections, packet loss, or routing delays. |
| **vmstat** | Reports virtual memory, I/O, and CPU scheduling metrics and helps you identify paging, swapping, CPU run-queue build-up, blocked processes, and I/O waits. |
| **iostat** | Displays disk I/O throughput, utilization, and latency statistics and helps you identify slow or overloaded disks that contribute to high response times or bottlenecks. |
| **Kernel Parameters** | Lists important OS tuning parameters and helps you identify misconfigurations such as low file descriptor limits, restricted socket buffers, or suboptimal memory settings. |
| **Extended Data** | Captures any additional scripts or diagnostic data you configure. Helps you include application-specific, framework-specific, or environment-specific metrics for deeper analysis. |
| **Metadata** | Collects system and JVM details such as hostname, OS version, uptime, and JVM version, helping you correlate incidents across environments or reproduce problems accurately. |
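Several of the system-level artifacts in the table are snapshots of standard Linux tools, so it helps to know what they look like by hand. A minimal sketch of capturing a few of them follows; the output directory and file names are illustrative (not the script's actual layout), and the less universal commands are guarded since tools like `vmstat` or `dmesg` may be missing or restricted on minimal hosts:

```shell
#!/usr/bin/env bash
# Minimal sketch: hand-capturing a few of the system-level artifacts above.
# Output location and file names are illustrative, not yc-360's layout.
OUT=$(mktemp -d)

ps -ef > "$OUT/ps.txt"                                   # process snapshot
df -h  > "$OUT/df.txt"                                   # disk usage per filesystem
top -b -n 1    > "$OUT/top.txt"    2>/dev/null || true   # one batch-mode CPU/memory snapshot
top -H -b -n 1 > "$OUT/top-H.txt"  2>/dev/null || true   # thread-level CPU usage
vmstat 1 3     > "$OUT/vmstat.txt" 2>/dev/null || true   # three one-second samples
dmesg          > "$OUT/dmesg.txt"  2>/dev/null || true   # kernel log (may need root)

ls "$OUT"
```

Capturing all of these within moments of each other matters: a `top -H` sample taken minutes after the thread dump no longer correlates with it, which is why a single automated run is preferable to ad-hoc collection.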
## Why You Need the yc-360 Script

**1. Capture Deeper 360° Artifacts for Root Cause Analysis:** Monitoring tools like APMs are excellent at reporting problems such as memory spikes, high CPU usage, or degraded response times. However, when it's time to get to the root cause, you need more than charts and alerts. For example, if memory consumption spikes, you will need a heap dump to identify which objects are leaking. If CPU usage increases, a thread dump is required to trace it back to the exact lines of code causing the spike. The yc-360 script complements your APM by collecting these [deeper diagnostic artifacts](https://www.youtube.com/watch?v=SjqMp8yE9sE) in a single run—making it much easier and faster to isolate the problem.

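The two artifacts called out above can also be captured manually with the stock JDK tools (`jstack`, `jmap`) that ship with every Java installation; yc-360 automates this alongside the rest of the artifacts. A hedged sketch, where `TARGET_JVM_PID` is a placeholder environment variable you would set to your application's process id:

```shell
#!/usr/bin/env bash
# Sketch: capturing the two JVM-side artifacts mentioned above with stock JDK
# tools. TARGET_JVM_PID is a placeholder for your application's process id.
PID="${TARGET_JVM_PID:-}"

if [ -n "$PID" ] && command -v jstack >/dev/null 2>&1; then
  # CPU spike: the thread dump maps busy threads back to lines of code
  jstack "$PID" > "thread-dump-$PID.txt"
  # Memory spike: the heap dump shows which objects are being retained
  jmap -dump:live,format=b,file="heap-$PID.hprof" "$PID"
  STATUS="captured thread dump and heap dump for pid $PID"
else
  STATUS="skipped: set TARGET_JVM_PID and ensure a JDK is on the PATH"
fi
echo "$STATUS"
```

Note that `jmap -dump:live` triggers a full GC before dumping, so expect a brief pause on the target JVM while the heap is written out.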
**2. Accelerate Troubleshooting in Customer On-Prem Environments:** In many cases, your application runs on a customer’s infrastructure where you don’t have shell access or real-time visibility. Asking them to send screenshots or partial logs [rarely provides enough context to troubleshoot effectively](https://blog.ycrash.io/key-challenges-in-troubleshooting-applications-at-customer-premise/). The yc-360 script solves this by giving you a simple script that the customer can run themselves. It gathers all the essential artifacts across the application, JVM, system, and network layers, so you get everything you need to troubleshoot the issue thoroughly, even without direct access.
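The "one script the customer runs" workflow boils down to: collect everything into one directory, archive it, and have the customer send back a single file. A simplified sketch of that packaging step (file names are illustrative, and yc-360 collects far more than this):

```shell
#!/usr/bin/env bash
# Simplified sketch of the single-run packaging idea: gather artifacts into
# one timestamped directory, then produce one archive to send back.
set -u
STAMP=$(date +%Y%m%d-%H%M%S)
DIR="artifacts-$STAMP"
mkdir -p "$DIR"

uname -a > "$DIR/metadata.txt"       # basic host metadata
ps -ef   > "$DIR/ps.txt"             # process snapshot
df -h    > "$DIR/df.txt"             # disk usage

tar -czf "$DIR.tar.gz" "$DIR"        # one file for the customer to send
echo "send back: $DIR.tar.gz"
```

Shipping a single archive avoids the back-and-forth of requesting artifacts one at a time, which is exactly where screenshot-based troubleshooting breaks down.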