Network Impairments are read-only in this demo version to prevent interference with the shared experience and live network traffic.
| Status | Disabled |
| Buffer | 0s — 0 packets, 0 B |
| Trigger Window | Pre + Post, in seconds |
| Condition | Threshold | Rx | Tx |
|---|---|---|---|
| Throughput | Min | | |
| | Max | | |
| | Mean | | |
| Packet Rate | Min | | |
| | Max | | |
| | Mean | | |
| Packet Gap | Max | | |
| | Mean | | |
| Sample Period | | | |
| Data Samples | | | |
JitterTrap is a network measurement and impairment tool for analyzing traffic patterns and simulating adverse network conditions.
| Version | (shown at runtime) |
| Commit | (shown at runtime) |
GitHub — source code, issues, releases
Copyright © 2014–2025 Andrew Cooks and contributors.
Licensed under the GNU GPLv2.
The Charts tab displays real-time network traffic on the selected interface.
| Throughput |
Shows bitrate or packet rate over time. Use the Series dropdown to switch between:
|
| Packet Gap |
Shows the time between consecutive packets (inter-packet gap). Useful for detecting:
|
| Top Talkers |
Shows bandwidth usage broken down by flow (source/destination IP address and port). Each flow gets a unique color. Hover over the legend to see flow details. The legend also shows the current RTT for TCP flows.
Use the Y-Axis Scale toggle to switch between Log and Linear scale.
|
| TCP RTT |
Shows round-trip time (RTT) for TCP flows over time. RTT is measured using TCP sequence numbers and acknowledgements. Each flow uses the same color as in the Top Talkers chart.
Markers:
Use the Y-Axis toggle to switch between Log and Linear scale. |
| TCP Window |
Shows the TCP advertised window (receive window) for each flow over time. The advertised window indicates how much data the receiver is willing to accept. Each flow uses the same color as in the Top Talkers chart.
Event Markers:
Note: A dotted line indicates the SYN handshake was not captured, so the window scale factor is unknown. The displayed values may be smaller than actual — modern TCP typically uses scale factors of 7-8, meaning actual windows are 128-256× larger than shown. |
| Interface | Select which network interface to monitor. All charts and measurements apply to this interface. |
| Interval | Chart time resolution in milliseconds. This controls how data is aggregated for display, not the underlying sampling rate. Lower values show more detail (zoom in), higher values show longer time spans (zoom out). Common values: 100ms for general use, 10-20ms for detailed analysis, 500-1000ms for long-term trends. |
| Pause/Run | Freeze the charts to examine data, or resume live updates. While paused, data continues to be collected but not displayed. |
| Capture | Trigger a packet capture. Downloads the buffered packets as a pcap file. See Packet Capture section for details. |
Impairments simulate adverse network conditions by applying delay, jitter, and packet loss to traffic transmitted by the selected interface. This is useful for testing how applications behave under poor network conditions.
| Delay |
Adds a fixed latency to every outgoing packet, specified in milliseconds.
Example: Setting delay to 50ms simulates a network path with 50ms one-way latency (100ms round-trip if applied on both ends). |
| Jitter |
Adds random variation to the delay. Each packet gets the base delay plus or minus a random value up to this amount. The total delay cannot go negative (clamped to zero).
Example: Delay 50ms + Jitter 10ms means each packet is delayed between 40-60ms. Note: Jitter requires a non-zero Delay value to be set. |
| Loss |
Randomly drops a percentage of outgoing packets. Each packet has an independent chance of being dropped.
Example: 5% loss means each packet has a 5% chance of being dropped, regardless of other packets. |
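The three impairments above amount to a per-packet decision. A minimal sketch, assuming uniformly distributed jitter and independent loss (the function name is illustrative, not JitterTrap's implementation):

```python
import random

def impair(base_delay_ms: float, jitter_ms: float, loss_pct: float):
    """Return the delay (ms) to apply to one packet, or None if dropped."""
    # Loss: each packet has an independent chance of being dropped.
    if random.random() * 100 < loss_pct:
        return None
    # Jitter: base delay plus or minus a uniform random offset,
    # clamped so the total delay never goes negative.
    delay = base_delay_ms + random.uniform(-jitter_ms, jitter_ms)
    return max(0.0, delay)
```

With Delay 50ms and Jitter 10ms, every returned delay falls in the 40-60ms range described above; with Loss 100%, every packet is dropped.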
Programs let you script sequences of impairment changes over time. This is useful for simulating varying network conditions during a test, such as gradually degrading quality or periodic disruptions. Click Add Program to create a new program with the template syntax.
JitterTrap provides deep visibility into TCP behavior through the TCP RTT and TCP Window charts. This is especially valuable when troubleshooting interactive, responsive, or real-time systems where TCP may be introducing unexpected latency or throughput problems.
These problems are particularly impactful for real-time and interactive systems. Each section describes what to look for in JitterTrap.
| Bufferbloat | |
| Symptoms | Latency increases dramatically under load. A connection that shows 20ms RTT when idle may spike to 500ms+ when saturated. Interactive applications become sluggish during bulk transfers. |
| In JitterTrap |
|
| Root Cause | Over-sized buffers in routers, switches, cable modems, or endpoint network stacks. Packets queue instead of being dropped, hiding congestion from TCP while adding latency. |
| What to Do | Enable AQM (Active Queue Management) like fq_codel or CAKE on routers. Reduce buffer sizes. Use ECN if supported. Consider if UDP would be more appropriate for your use case. |
| Receive Window Starvation | |
| Symptoms | Throughput suddenly drops to zero, then slowly recovers. The sender is blocked waiting for the receiver to consume data. |
| In JitterTrap |
|
| Root Cause | The receiving application isn't reading data fast enough. Common causes: GC pauses, disk I/O blocking, thread starvation, slow processing, or the application being suspended. |
| What to Do | Profile the receiving application. Look for blocking operations in the read path. Increase receive buffer sizes (SO_RCVBUF). Consider async I/O or dedicated reader threads. |
| Head-of-Line Blocking | |
| Symptoms | Intermittent freezes or stalls even with a good network. Data arrives in bursts after delays. Application-level latency is much higher than network RTT. |
| In JitterTrap |
|
| Root Cause | TCP guarantees in-order delivery. A single lost packet blocks all subsequent data from being delivered to the application until the retransmit arrives. This is fundamental to TCP and cannot be fixed. |
| What to Do | If your application can tolerate out-of-order or lost data, use UDP instead. For real-time media, gaming, or telemetry, UDP with application-level recovery is usually better. QUIC solves this for HTTP/3. |
| Nagle's Algorithm + Delayed ACK | |
| Symptoms | Small messages have ~40ms latency even on fast networks. Request-response protocols feel sluggish. Latency is suspiciously consistent at 40ms multiples. |
| In JitterTrap |
|
| Root Cause | Nagle's algorithm (sender) delays small packets hoping to batch them. Delayed ACK (receiver) waits up to 40ms before acknowledging. Together they create a deadlock: sender waits for ACK, receiver waits to piggyback ACK on response. |
| What to Do |
Set TCP_NODELAY socket option to disable Nagle. Ensure request-response protocols send complete messages in single writes. Some systems also allow tuning delayed ACK timeout.
|
| Congestion Window Collapse | |
| Symptoms | Throughput drops dramatically after packet loss, then slowly recovers over seconds. Classic "sawtooth" throughput pattern. |
| In JitterTrap |
|
| Root Cause | TCP's loss-based congestion control interprets any packet loss as congestion and cuts the sending rate. It then slowly probes for capacity (slow start / congestion avoidance). This is working as designed but may not suit your needs. |
| What to Do | Use BBR congestion control instead of CUBIC/Reno if available. Enable ECN to get early congestion signals. For real-time traffic, consider UDP with application-level rate control. |
| Retransmission Timeout (RTO) Stalls | |
| Symptoms | Long stalls (1-3+ seconds) followed by a burst of activity. Much worse than typical packet loss recovery. |
| In JitterTrap |
|
| Root Cause | When fast retransmit (3 dup ACKs) fails, TCP falls back to RTO-based recovery. The minimum RTO is often 200ms-1s, and it doubles with each failed attempt (exponential backoff). A lost retransmit can cause multi-second stalls. |
| What to Do | Investigate why fast retransmit is failing (tail loss, small windows). Enable TLP (Tail Loss Probe) and RACK if available. For latency-sensitive applications, these stalls may be unacceptable — consider UDP. |
| Silly Window Syndrome | |
| Symptoms | High packet rate but low throughput. Efficiency is terrible. Lots of small packets instead of full-sized segments. |
| In JitterTrap |
|
| Root Cause | Receiver advertises tiny windows (e.g., after window starvation recovery). Sender sends tiny segments to fill the advertised window. Overhead dominates. |
| What to Do | Most TCP stacks have SWS avoidance built in. If you're seeing this, check for broken or embedded TCP implementations. Increase receive buffer sizes. |
TCP is designed for reliable, ordered delivery of bulk data. These guarantees come at a cost that's often invisible until you look closely:
| TCP Behavior | Cost for Real-Time Systems |
|---|---|
| Guaranteed delivery | Stalls waiting for retransmits of data that may no longer be relevant |
| In-order delivery | Head-of-line blocking — one lost packet blocks everything behind it |
| Congestion control | Throughput collapse after loss; slow recovery; competing flows affect each other |
| Connection establishment | 1.5 RTT before first data byte; connection state on both ends |
| Flow control | Slow receiver blocks fast sender, even if data could be dropped |
Consider UDP when: You can tolerate some loss, need lowest latency, data has a "freshness" deadline, or you want application-level control over retransmission decisions.
Examples: VoIP, video conferencing, gaming, live telemetry, sensor data, financial trading, DNS.
| Healthy TCP Flow |
|
| Network Congestion |
|
| Application Bottleneck |
|
| Lossy Link |
|
Every TCP ACK includes a window field advertising how many bytes the receiver is willing to accept. This prevents a fast sender from overwhelming a slow receiver.
```
Sender                             Receiver
  | ---- data (1000 bytes) ---->  |
  | <---- ACK, window=64000 ----  |   "I can accept 64KB more"
  | ---- data (1000 bytes) ---->  |
  | <---- ACK, window=63000 ----  |   "Now 63KB" (read 0 bytes)
  | ---- data (1000 bytes) ---->  |
  | <---- ACK, window=65000 ----  |   "65KB" (app read 3KB)
```
The sender can have up to window bytes of unacknowledged data in flight. If the window shrinks to zero, the sender must stop and wait.
Window scaling: The 16-bit window field limits the window to 64KB, which is too small for high-bandwidth or high-latency links. TCP window scaling (negotiated in the SYN handshake) multiplies the field by 2^scale. Modern systems use scale factors of 7-8 (128-256×), allowing windows of 8-16MB.
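Recovering the real window from a scaled announcement is a single left shift. A quick check of the arithmetic (the helper name is illustrative):

```python
def effective_window(window_field: int, scale: int) -> int:
    """Actual receive window = 16-bit field x 2^scale (RFC 7323)."""
    return window_field << scale

# A full 16-bit field with scale factor 8 advertises roughly 16 MB.
assert effective_window(65535, 0) == 65535      # no scaling: 64 KB max
assert effective_window(65535, 8) == 16776960   # ~16 MB
```

This is also why the TCP Window chart can only show unscaled values when the SYN handshake (which carries the scale factor) was not captured.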
Besides the receive window (rwnd), TCP maintains a congestion window (cwnd) limiting how much data can be in flight based on estimated network capacity. The actual limit is min(rwnd, cwnd).
```
Time →

cwnd: [1] [2] [4] [8] [16] [32] [64] ... [128]  ↓  [64] [65] [66] ...
      \_____________________________________/   |   \_______________
           Slow Start (exponential)      Loss detected!   Congestion
                                         cwnd halved,     Avoidance
                                         then...          (linear)
```
Slow Start: cwnd starts small (1-10 segments) and doubles every RTT until loss or threshold.
Congestion Avoidance: After loss, cwnd increases by ~1 segment per RTT (additive increase).
On loss: cwnd is halved (multiplicative decrease). This creates the sawtooth pattern.
Recovery from loss is slow because TCP must re-probe for capacity. A single loss event can take seconds to recover, during which throughput is degraded.
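The sawtooth can be illustrated with a toy AIMD model. This is a deliberate simplification (real stacks also track ssthresh, pacing, and RTT variation), assuming loss detection at given RTT numbers:

```python
def aimd(rtts: int, loss_at: set, cwnd: float = 1.0) -> list:
    """Toy slow start + AIMD: double per RTT until the first loss,
    then +1 segment per RTT, halving the window on each loss."""
    trace, slow_start = [], True
    for t in range(rtts):
        trace.append(cwnd)
        if t in loss_at:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease
            slow_start = False
        elif slow_start:
            cwnd *= 2                    # exponential growth
        else:
            cwnd += 1                    # additive increase (1 segment/RTT)
    return trace

# Loss at RTT 7 halves the window; recovery afterwards is linear:
# aimd(12, {7}) → [1, 2, 4, 8, 16, 32, 64, 128, 64, 65, 66, 67]
```

Note how many RTTs of linear growth it takes to regain what a single loss event removed, which is the slow recovery described above.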
TCP has two mechanisms to detect and recover from loss:
- Fast retransmit: three duplicate ACKs signal a missing segment and trigger an immediate retransmit.
- Retransmission timeout (RTO): if no ACK arrives within the timeout, TCP retransmits and backs off exponentially.
Tail loss is particularly problematic: if the last packets of a burst are lost, there are no subsequent packets to trigger dup ACKs, so TCP must wait for RTO. This causes multi-second stalls for what might be a single lost packet.
Nagle's algorithm (RFC 896) prevents sending small packets when previous data is unacknowledged:
if (unacked_data > 0 && new_data < MSS):
buffer new_data until ACK received
else:
send immediately
This is efficient for bulk transfers but terrible for interactive protocols. Combined with delayed ACK (receiver waits 40-200ms before sending ACK), it creates artificial latency.
Solution: Set TCP_NODELAY socket option to disable Nagle's algorithm.
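Disabling Nagle is a single `setsockopt` call on the connected (or about-to-connect) socket; the endpoint below is a placeholder:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes go out immediately instead of
# being buffered until the previous segment is acknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# sock.connect(("example.com", 443))  # placeholder endpoint
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
```

The same option exists in every mainstream socket API (C, Java, Go, etc.) under the name `TCP_NODELAY`.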
ECN allows routers to signal congestion without dropping packets:
- The endpoints mark packets as ECN-capable (ECT) during the handshake.
- A congested router sets the CE (Congestion Experienced) mark instead of dropping the packet.
- The receiver echoes the signal back with the ECE flag; the sender reduces its rate and confirms with CWR.
ECN provides earlier congestion signals than loss, allowing TCP to react before queues overflow. JitterTrap shows ECE and CWR markers on the TCP Window chart.
JitterTrap measures RTT by tracking TCP sequence numbers and their acknowledgements:
```
RTT = time(ACK received) - time(segment sent)

For segment with sequence S:
  - Record send_time when packet with seq=S observed
  - Record ack_time when ACK covering S observed
  - RTT sample = ack_time - send_time
```
This passive measurement doesn't require modifying endpoints. Accuracy depends on seeing both directions of the flow. Retransmitted segments are excluded (ambiguous which transmission was ACKed).
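The bookkeeping can be sketched as follows. This is an illustrative model with assumed names (JitterTrap's actual implementation differs), but it follows the rules above, including discarding ambiguous retransmitted segments:

```python
class RttTracker:
    """Passive RTT from one direction's segments and the returning ACKs."""

    def __init__(self):
        self.pending = {}   # expected ACK number -> send timestamp
        self.samples = []   # RTT samples in seconds

    def on_segment(self, seq: int, length: int, now: float,
                   retransmit: bool = False):
        ack_expected = seq + length
        if retransmit:
            # Ambiguous which transmission a later ACK covers: drop the sample.
            self.pending.pop(ack_expected, None)
        else:
            self.pending.setdefault(ack_expected, now)

    def on_ack(self, ack: int, now: float):
        # An ACK covers every outstanding segment ending at or before it.
        for end in [e for e in self.pending if e <= ack]:
            self.samples.append(now - self.pending.pop(end))
```

For example, a segment sent at t=0 covering bytes 1000-1099 and an ACK for 1100 arriving at t=0.025 yields one 25ms sample.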
JitterTrap automatically detects and analyzes RTP video and audio streams by passively observing network traffic. When a media flow is detected, an icon appears in the Top Talkers legend:
You can watch detected video streams directly in your browser using WebRTC:
How it works: JitterTrap captures RTP packets from the network and forwards them to your browser via WebRTC. The browser decodes and displays the video. This is a passive tap — JitterTrap observes traffic without modifying it.
Requirements:
Latency: Expect 100-500ms of latency depending on GOP size. Streams with frequent keyframes (low GOP) start faster and have lower latency.
| Type | Stream type: RTP for Real-time Transport Protocol streams, or MPEG-TS for transport streams. |
| Codec |
Video codec detected from the RTP payload:
|
| Source |
How the codec was identified:
|
| Resolution | Video frame dimensions (e.g., "1920x1080") extracted from SPS (Sequence Parameter Set) NAL units. May show "-" until an SPS is received, which typically occurs at stream start or with each keyframe. |
| Profile |
Codec profile and level (e.g., "High@L4.0"). Higher profiles support more features; higher levels support higher resolutions and bitrates.
H.264 profiles: Baseline (simple), Main (B-frames), High (best quality)
H.265 profiles: Main, Main 10 (10-bit color), Main Still Picture |
| FPS |
Frames per second, calculated as a 1-second rolling average. Each unique RTP timestamp represents one video frame.
Note: Small variations (e.g., 29.97 vs 30.00 fps) are normal and reflect the source timing. |
| Bitrate |
Video stream bitrate in kbps or Mbps, calculated as a 1-second rolling average. Measures RTP payload bytes (excluding headers).
Bitrate varies with scene complexity — static scenes compress better than motion. |
| Jitter |
Packet arrival time variation, calculated as a 1-second rolling average of the mean absolute deviation from expected timing.
|
| GOP | Group of Pictures — frames between keyframes. Typical: 30-60 for streaming, 1-2 for ultra-low-latency, 250+ for broadcast. |
| Keyframes | Total IDR (H.264) or IRAP (H.265) frames observed since detection. |
| Seq Loss | RTP sequence number gaps detected, indicating lost or severely reordered packets. |
| SSRC | Synchronization Source — 32-bit identifier (hex) uniquely identifying this RTP stream. |
JitterTrap detects RTP audio streams and displays them with a ♫ icon. Click to expand and see:
| Codec |
Audio codec detected from RTP payload type:
|
| Sample Rate | Audio sampling frequency (e.g., 8 kHz for telephony, 48 kHz for high-quality audio). |
| Bitrate | Audio stream bitrate, typically 64-256 kbps depending on codec. |
| Jitter | Packet timing variance. Audio is more sensitive to jitter than video — values above 20-30ms may cause audible glitches. |
| Seq Loss | RTP sequence gaps. Even small losses can cause audible clicks or dropouts. |
| SSRC | Stream identifier. Audio and video from the same source typically have different SSRCs. |
| "Waiting for keyframe" |
Playback cannot start until a keyframe (IDR) arrives. This is normal — wait for the next keyframe, which depends on the stream's GOP setting.
Tip: Streams with GOP=30 at 30fps send keyframes every second. GOP=300 means one keyframe every 10 seconds. |
| Black screen after starting | Usually means waiting for a keyframe. Check the "Keyframes" counter — if it's increasing, one should arrive soon. |
| Playback stutters or freezes | Check Jitter and Seq Loss. High jitter (>10ms) or packet loss causes decoder stalls. The network path may be congested. |
| Green/purple artifacts | Packet loss corrupted reference frames. Wait for the next keyframe to recover, or check network for packet loss causes. |
| "Codec not supported" | Your browser cannot decode this codec. H.264 is widely supported; H.265 requires Safari or Edge. Try a different browser. |
| Video detected but no Play button | The server may not have WebRTC support enabled, or the codec couldn't be identified. Check that Resolution and Profile show values (not "-"). |
| Multiple SSRCs on same flow | The source is sending multiple streams (e.g., simulcast, or audio+video). Each SSRC appears as a separate entry in the legend. |
RTP is the standard protocol for streaming media over IP networks. Each RTP packet contains:
- A sequence number, used for ordering and loss detection
- A timestamp in the media clock (e.g., 90 kHz for video)
- An SSRC identifying the stream
- A payload type indicating the codec
RTP usually runs over UDP on even-numbered ports (e.g., 5004), with RTCP control messages on the next odd port (e.g., 5005).
RTSP is a signaling protocol (like SIP for VoIP) that sets up RTP streams. An RTSP session typically:
1. DESCRIBE — the client requests stream details; the server replies with an SDP description (codecs, formats)
2. SETUP — transport parameters (RTP/RTCP ports) are negotiated
3. PLAY — the server starts sending RTP media
JitterTrap passively observes RTSP signaling to learn codec information from SDP, which is more reliable than guessing from packet inspection.
A NAL (Network Abstraction Layer) unit is the basic packaging unit for H.264/H.265 video data. Think of it as an envelope that wraps different types of video information.
Video codecs need to send different types of data: configuration info, keyframes, and delta frames. NAL units provide a standard way to identify what type of data is inside, delimit where one chunk ends and another begins, and transport video over networks.
NAL unit structure:
```
+----------------+---------------------------+
|   NAL Header   |        NAL Payload        |
|  (1-2 bytes)   |     (variable length)     |
+----------------+---------------------------+
```
Common NAL types (H.264):
| Type | Name | Description |
|---|---|---|
| 1 | Non-IDR slice | Part of a P or B frame (needs previous frames to decode) |
| 5 | IDR slice | Keyframe — can be decoded independently |
| 6 | SEI | Supplemental info (timecodes, captions, etc.) |
| 7 | SPS | Sequence Parameter Set — resolution, profile, level |
| 8 | PPS | Picture Parameter Set — encoding settings |
Why this matters for JitterTrap:
What arrives in a typical stream:
```
Time →
[SPS][PPS][IDR][P][P][P][P][P][P][P]...[IDR][P][P]...
  ↑    ↑    ↑                            ↑
  |    |    |                            Next keyframe (GOP boundary)
  |    |    Keyframe - playback can start here
  |    Encoding parameters
  Resolution, profile, level
```
IP cameras typically send SPS+PPS before each IDR, so you get fresh configuration with every keyframe. This is why resolution/profile often shows "-" until the first keyframe arrives.
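Identifying NAL types in an H.264 Annex B byte stream needs only the start codes and the low five bits of the header byte. A simplified sketch (RTP uses a different packetization, and this ignores emulation-prevention bytes):

```python
H264_NAL_NAMES = {1: "non-IDR slice", 5: "IDR", 6: "SEI", 7: "SPS", 8: "PPS"}

def nal_types(annexb: bytes) -> list:
    """Return the nal_unit_type of each NAL unit after a 3-byte start code."""
    types, i = [], 0
    while (i := annexb.find(b"\x00\x00\x01", i)) != -1:
        i += 3                              # skip the start code
        if i < len(annexb):
            types.append(annexb[i] & 0x1F)  # low 5 bits = nal_unit_type
    return types

# A stream opening with SPS, PPS, then an IDR keyframe:
stream = b"\x00\x00\x01\x67" + b"\x00\x00\x01\x68" + b"\x00\x00\x01\x65"
assert [H264_NAL_NAMES[t] for t in nal_types(stream)] == ["SPS", "PPS", "IDR"]
```

This is the same classification JitterTrap performs to spot SPS (resolution/profile), IDR frames (keyframe count and GOP), and the rest of the table above.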
A single UDP flow (IP:port pair) can carry multiple RTP streams, each with a different SSRC. Common scenarios:
JitterTrap tracks each SSRC separately, so audio and video from the same source appear as distinct entries even if they share an IP/port.
For each packet, JitterTrap computes:
D = (arrival_time - prev_arrival) - (rtp_timestamp - prev_timestamp) / clock_rate
This measures the difference between actual and expected inter-packet time. The displayed jitter is the mean |D| over the last second. Unlike RFC 3550's smoothed jitter (which uses a 1/16 smoothing factor), this shows actual recent variance and responds quickly to network changes.
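The per-packet computation can be sketched as follows, assuming arrival times in seconds and RTP timestamps in clock ticks (the function name is illustrative):

```python
def jitter_samples(packets, clock_rate):
    """|D| for each consecutive packet pair: the difference between actual
    and expected inter-arrival time. packets = [(arrival_s, rtp_ts), ...]."""
    samples = []
    for (t0, ts0), (t1, ts1) in zip(packets, packets[1:]):
        expected = (ts1 - ts0) / clock_rate   # spacing implied by timestamps
        actual = t1 - t0                      # spacing actually observed
        samples.append(abs(actual - expected))
    return samples

# 90 kHz video clock, timestamps 20 ms apart; the second packet arrives
# on time, the third arrives 5 ms late.
pkts = [(0.000, 0), (0.020, 1800), (0.045, 3600)]
```

Averaging these samples over the last second gives the displayed jitter value.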
Traps monitor traffic and trigger actions when thresholds are crossed. Use them to detect anomalies like throughput drops, traffic spikes, or excessive packet gaps.
Creating a trap:
Trap indicators:
| Grey circle | Trap has not been triggered |
| Red circle | Trap has been triggered — the threshold was crossed |
| Grey camera | Packet capture enabled, not yet triggered |
| Green camera | Packet capture was triggered (one-shot — will not trigger again until reset) |
Click the reset button to clear the triggered state and re-arm the trap. Click the delete button to remove a trap.
Shows min/max/mean statistics for the data currently displayed in the charts:
| Throughput | Bits per second (bps, kbps, Mbps, Gbps) |
| Packet Rate | Packets per second (pps) |
| Packet Gap | Time between consecutive packets in milliseconds |
| Sample Period | Current sampling interval |
| Data Samples | Number of samples in the current chart window |
Statistics are calculated over the visible chart data and reset when you change interfaces or the sample period.
JitterTrap maintains a rolling buffer of recent packets (up to 30 seconds). When you click Capture or a trap triggers, the buffered packets are saved to a pcap file and downloaded automatically.
| Status | Shows whether packet recording is active (Recording) or not (Disabled). Recording starts automatically when you select an interface. |
| Buffer | Current buffer depth in seconds, number of packets stored, and total data size. |
| Pre + Post |
Configure how much data to capture around the trigger event:
|
The downloaded pcap file can be opened in Wireshark or similar tools for detailed packet-level analysis.
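The rolling buffer described above behaves like a time-bounded queue: new packets push old ones out once they fall outside the window. An illustrative Python model (JitterTrap itself is implemented differently; names here are assumptions):

```python
from collections import deque

class RollingCapture:
    """Keep only packets newer than max_age seconds (default 30s window)."""

    def __init__(self, max_age: float = 30.0):
        self.max_age = max_age
        self.buf = deque()               # (timestamp, packet_bytes) pairs

    def add(self, ts: float, pkt: bytes):
        self.buf.append((ts, pkt))
        # Evict anything older than the window, relative to the newest packet.
        while self.buf and ts - self.buf[0][0] > self.max_age:
            self.buf.popleft()

    def snapshot(self):
        """The packets that would be written to the pcap on trigger."""
        return list(self.buf)
```

On Capture (or a trap firing), the snapshot is what ends up in the downloaded pcap file.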
This button normally downloads a pcap file containing the last 30 seconds of network traffic.
This feature is disabled in the demo to protect network privacy.