How a Forgotten Debug Endpoint Exposed a 271,000-Connection Proxy Empire
Unraveling Riptide: a previously undocumented proxy-as-a-service platform
A single /debug/pprof/ endpoint. That's all it took to unravel a commercial proxy-as-a-service operation processing hundreds of thousands of concurrent connections across eight servers, identify the developer's Windows username, trace the source code distribution to a Telegram channel, reconstruct the complete software architecture, and discover a co-located credential stuffing operation targeting Subway restaurant loyalty accounts.
This is the story of Riptide -- a previously undocumented proxy platform that, until today, had zero detections across VirusTotal, ThreatFox, URLhaus, and MalwareBazaar.
It Started With a Tweet
On March 30, security researcher German Fernandez (@1ZRR4H) shared six IP addresses on X. One had appeared in password spraying activity, and pivoting from it had uncovered additional servers. All six hosted identical panels on port 5000 -- a minimalist dark-themed login page titled "unknown -- access" with a single "Access Key" field and the subtitle "restricted access."
The infrastructure spanned two subnets: three nodes on Secure Internet LLC (US) and three on Velcom (Canada). Every panel served identical HTML -- same CSS, same fonts (JetBrains Mono + Plus Jakarta Sans), same indigo accent color. A coordinated deployment.
But the panels weren't the interesting part.
Port 666
Alongside the management panel on port 5000, several nodes exposed an additional service on port 666. A GET request to the root returned a plain "404 page not found" -- Go's default HTTP response. So we checked the path that Go developers sometimes forget to disable in production:
GET /debug/pprof/ HTTP/1.1
Host: 192[.]253[.]248[.]174:666
It answered.
The Anatomy of an OPSEC Catastrophe
Go's built-in net/http/pprof package is a profiling tool designed for development. When registered on an HTTP server, it exposes runtime internals: goroutine stack traces, heap allocations, thread creation, and the binary's command-line invocation. In development, it's invaluable. In production, on the open internet, it's a complete intelligence breach.
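The exposure mechanism is worth spelling out: a single blank import of net/http/pprof registers the debug handlers on http.DefaultServeMux, so any server built on the default mux silently ships them. A minimal, self-contained demonstration (using a local test server, not the actual Riptide host):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	_ "net/http/pprof" // blank import: registers /debug/pprof/* on http.DefaultServeMux
)

// pprofExposure simulates the misconfiguration: serve the default mux
// (as many Go apps do) and see what an unauthenticated GET returns.
func pprofExposure() (status int, cmdline string) {
	srv := httptest.NewServer(http.DefaultServeMux)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/debug/pprof/cmdline")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body)
}

func main() {
	status, cmdline := pprofExposure()
	// The endpoint hands back the binary's full invocation -- the same
	// class of data that exposed /root/riptide/riptide.
	fmt.Println(status, cmdline)
}
```

The blank import is the trap: it has no visible effect at the call site, which is why the handlers so often survive into production builds.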
The Binary
/root/riptide/riptide
The cmdline endpoint revealed the binary's invocation path. This wasn't a known proxy tool -- not Squid, not 3proxy, not any commercial or open-source proxy we could identify. It was something custom.
The Developer
The goroutine stack traces contained full filesystem paths from the build machine:
C:/Users/Kompiuteris/Downloads/Telegram Desktop/riptide-main/internal/socketmanager/tcp.go:66
Three details jumped out immediately:

- Kompiuteris -- the Windows username. This is Lithuanian for "computer." Not English, not Russian, not Chinese. Lithuanian.
- Downloads/Telegram Desktop/ -- the source code wasn't cloned from Git. It was downloaded through Telegram Desktop. This is a distribution model: someone sends the source (or a build) through a Telegram group or channel, and the developer downloads it to their Windows machine.
- riptide-main/ -- the project name. Now we had something to search for.
The Complete Source Tree
By parsing the stack traces across 15,000+ goroutines, we reconstructed the entire source code architecture:
riptide-main/
+-- cmd/riptide/main.go
+-- internal/
| +-- acl/
| | +-- legacy/usersync.go
| | +-- reporter/report.go
| | +-- usersync/usersync.go
| +-- handlers/http/http.go
| +-- ipallocator/ipallocator.go
| +-- proxytunnel/upstream/handle.go
| +-- socketmanager/tcp.go
| +-- tracker/tracker.go
| +-- upstreamselector/builder.go
+-- pkg/
| +-- clickhouse/table.go
| +-- distlimit/distlimit.go
| +-- dnscache/dnscache.go
Thirteen source files. A clean, idiomatic Go project structure with proper separation between internal packages and public APIs. This wasn't a script kiddie's weekend project -- it was competently engineered software with dedicated packages for ACL management, bandwidth tracking, distributed rate limiting, DNS caching, and upstream proxy selection.
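Reconstructing the tree from a goroutine dump is mechanical: every stack frame carries a full build-machine file path, so a regex over the dump recovers the project-relative layout. A sketch of the approach in Go (the riptide-main/ marker comes from the observed build paths; the dump below is an abbreviated illustrative example, not the full capture):

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strings"
)

// sourceFiles extracts project-relative Go file paths from a
// /debug/pprof/goroutine?debug=2 dump, keyed on the project
// directory name seen in the build paths.
func sourceFiles(dump string) []string {
	re := regexp.MustCompile(`\S*riptide-main/(\S+\.go):\d+`)
	seen := map[string]bool{}
	for _, m := range re.FindAllStringSubmatch(dump, -1) {
		seen[m[1]] = true
	}
	var files []string
	for f := range seen {
		files = append(files, f)
	}
	sort.Strings(files)
	return files
}

func main() {
	// Abbreviated sample of a goroutine dump with build-machine paths.
	dump := strings.Join([]string{
		"goroutine 1 [running]:",
		"\tC:/Users/Kompiuteris/Downloads/Telegram Desktop/riptide-main/internal/socketmanager/tcp.go:66 +0x1a4",
		"goroutine 2 [select]:",
		"\tC:/Users/Kompiuteris/Downloads/Telegram Desktop/riptide-main/pkg/dnscache/dnscache.go:41 +0x88",
	}, "\n")
	fmt.Println(sourceFiles(dump))
}
```

Run across 15,000+ goroutines, deduplication collapses the dump to the thirteen-file tree above.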
The Dependencies
The heap profile revealed the external module graph:
- github.com/ClickHouse/clickhouse-go/v2 -- ClickHouse database client for analytics
- github.com/ClickHouse/ch-go -- low-level ClickHouse protocol implementation
- github.com/go-redis/redis/v8 -- Redis client for caching and distributed state
- golang.org/x/net/internal/socks -- SOCKS5 protocol implementation
How Riptide Works
The goroutine breakdown told us exactly what was happening at the moment of capture:
| Component | Active Goroutines | Function |
|---|---|---|
| Client socket reads | 4,679 | Reading incoming proxy requests |
| Upstream data copy | 4,669 | Relaying data to/from upstream proxies |
| HTTPS-over-SOCKS5 | 2,396 | Tunneling HTTPS through SOCKS5 upstreams |
| HTTPS-over-HTTP | 2,272 | Tunneling HTTPS through HTTP CONNECT |
| SOCKS5 dials | 438 | Establishing new upstream connections |
| Connection setup | 189 | Negotiating with upstream proxies |
| Total | ~15,000 | On a single node |
The architecture is a proxy chain: customer traffic arrives at Riptide, which forwards it through upstream SOCKS5 or HTTP proxies -- likely residential or compromised endpoints. The upstreamselector package handles rotation and load balancing across the upstream pool. The tracker.CountingConn wrapper meters every byte for billing purposes. Everything gets logged to ClickHouse in batch inserts for analytics.
This is a professional proxy-as-a-service platform. Customers authenticate to the proxy ports (81, 4444, 5555, 9191) using Basic auth. When authentication fails, the error message reads: "Your username or password is incorrect or your plan expired." A subscription model.
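The per-byte metering described above takes only a thin net.Conn wrapper. A sketch of what a tracker.CountingConn-style type might look like -- the field and method names here are our assumptions, not recovered code:

```go
package main

import (
	"fmt"
	"net"
	"sync/atomic"
)

// CountingConn wraps a net.Conn and meters bytes in each direction,
// in the spirit of Riptide's tracker.CountingConn (names assumed).
type CountingConn struct {
	net.Conn
	read, written int64
}

func (c *CountingConn) Read(p []byte) (int, error) {
	n, err := c.Conn.Read(p)
	atomic.AddInt64(&c.read, int64(n))
	return n, err
}

func (c *CountingConn) Write(p []byte) (int, error) {
	n, err := c.Conn.Write(p)
	atomic.AddInt64(&c.written, int64(n))
	return n, err
}

func (c *CountingConn) Totals() (read, written int64) {
	return atomic.LoadInt64(&c.read), atomic.LoadInt64(&c.written)
}

// demo pushes 4 bytes each way over an in-memory pipe and returns the totals.
func demo() (int64, int64) {
	a, b := net.Pipe()
	cc := &CountingConn{Conn: a}
	go func() {
		buf := make([]byte, 16)
		b.Read(buf) // drain the peer side so the metered Write can complete
		b.Write([]byte("pong"))
		b.Close()
	}()
	cc.Write([]byte("ping"))
	buf := make([]byte, 16)
	cc.Read(buf)
	return cc.Totals()
}

func main() {
	r, w := demo()
	fmt.Println(r, w)
}
```

Because the wrapper satisfies net.Conn, it can be dropped in front of every client and upstream socket without touching the relay logic, and the totals batch straight into ClickHouse for billing.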
Scale: 271,000 Concurrent Connections
Our initial capture on the US nodes showed ~15,000 goroutines per server. But when we returned to the Canadian nodes at a different time, the numbers told a different story:
| Subnet | Goroutines per Node | Nodes | Subtotal |
|---|---|---|---|
| US (192.253.248.x) | ~11,400 | 3 | ~34,200 |
| Canada (104.234.204.x) | ~59,000 | 4 | ~237,000 |
| Total | | 7 | ~271,000 |
Peak observed was 85,424 goroutines on a single node. The Canadian cluster handles 5x the traffic of the US cluster. Total capacity fluctuates between 34,000 and 367,000 concurrent connections depending on time of day -- suggesting the customer base has distinct geographic usage patterns.
For context: that's more concurrent connections than most legitimate CDN edge nodes process.
The Cloned Infrastructure
The Canadian nodes shared another secret. When we collected SSH host key fingerprints:
104[.]234[.]204[.]10 -> SHA256:rrAhoYYZtnZsnA5Cz/wo08rpl7w7T+q63VEkAMx+uKs (ECDSA)
104[.]234[.]204[.]229 -> SHA256:rrAhoYYZtnZsnA5Cz/wo08rpl7w7T+q63VEkAMx+uKs (ECDSA)
104[.]234[.]204[.]230 -> SHA256:rrAhoYYZtnZsnA5Cz/wo08rpl7w7T+q63VEkAMx+uKs (ECDSA)
104[.]234[.]204[.]231 -> SHA256:rrAhoYYZtnZsnA5Cz/wo08rpl7w7T+q63VEkAMx+uKs (ECDSA)
Identical across all four nodes, all three algorithms (ECDSA, RSA, ED25519). These servers weren't provisioned independently -- they were stamped from a single VM snapshot. The operator builds one golden image and clones it. Efficient deployment, but a single fingerprint now links every node in the cluster.
Finding the 7th Node
German's original tweet listed six IPs. During our subnet scan, we discovered a 7th: 104[.]234[.]204[.]229 -- same panel, same Riptide binary, same SSH keys, same proxy ports. Missed in the initial enumeration but clearly part of the same deployment.
The Subway Connection
While scanning the Velcom subnet for additional Riptide nodes, we found something unexpected at 104[.]234[.]204[.]82 -- not another proxy node, but a Subway Kount Session Generator.
This is a credential stuffing tool purpose-built for attacking Subway restaurant loyalty accounts. It generates synthetic iOS device fingerprints (iPhone 13/14 Pro) to bypass Kount antifraud detection, crafts PKCE challenges for OAuth2 authentication, and spoofs MSAL iOS client signatures to make automated login attempts appear as legitimate Subway mobile app sessions routed through Azure AD B2C.
The co-location isn't coincidental. The operator (or their customer) uses the Riptide proxy network to distribute credential stuffing traffic across residential IP addresses while the Subway tool handles the application-layer fraud. Password spraying through a proxy-as-a-service infrastructure, with specialized tools for specific targets -- this is the modern credential stuffing supply chain in action.
The AI-Generated Panel
One detail worth noting: the management panel's design choices -- JetBrains Mono paired with Plus Jakarta Sans, an indigo accent (#6366f1), dark theme with specific CSS variable naming conventions -- are hallmarks of AI-generated UI code. The single-field authentication with no CSRF protection, Flask-style session cookies, and basic rate limiting (293-second lockout after ~25 attempts) suggest the panel was rapidly prototyped, likely with assistance from an LLM.
The backend tells a different story. Riptide's Go codebase is well-structured, with proper package boundaries, connection pooling, distributed rate limiting, and ClickHouse integration for analytics. The developer knows Go. They just didn't want to spend time on the admin panel.
The Intelligence Gap
Perhaps the most striking finding: this infrastructure had zero prior threat intelligence coverage. No VirusTotal detections. No ThreatFox IOCs. No URLhaus entries. No MalwareBazaar samples. Eight servers processing hundreds of thousands of connections daily for password spraying and credential stuffing, and not a single public report existed before this investigation.
The Riptide binary itself appears to be distributed exclusively through Telegram -- no public repositories, no dark web marketplace listings that we could identify. This is a closed ecosystem: developer to operator via private channels, with no public footprint until an exposed pprof endpoint gave it all away.
Indicators of Compromise
Network Infrastructure
| IP Address | Role | Hosting |
|---|---|---|
| 192.253.248[.]171 | Proxy node + ClickHouse | Secure Internet LLC, US |
| 192.253.248[.]174 | Proxy node | Secure Internet LLC, US |
| 192.253.248[.]175 | Proxy node + ClickHouse | Secure Internet LLC, US |
| 104.234.204[.]10 | Proxy node | Velcom, Canada |
| 104.234.204[.]82 | Subway credential stuffing | Velcom, Canada |
| 104.234.204[.]229 | Proxy node (new) | Velcom, Canada |
| 104.234.204[.]230 | Proxy node | Velcom, Canada |
| 104.234.204[.]231 | Proxy node | Velcom, Canada |
Detection Signatures
| Indicator | Value |
|---|---|
| Panel title | unknown -- access |
| Panel form field | secret_key |
| Binary path | /root/riptide/riptide |
| Go module path | riptide/internal/* |
| pprof endpoint | Port 666, /debug/pprof/ |
| Proxy auth error | "Your username or password is incorrect or your plan expired" |
SSH Fingerprints (Cloned Infrastructure)
ECDSA: SHA256:rrAhoYYZtnZsnA5Cz/wo08rpl7w7T+q63VEkAMx+uKs
RSA: SHA256:wGUqhZgPo02wxgJgHR0oxFGtFZ4bWNgEccZQ3HCpJOA
ED25519: SHA256:YTXvLbMO0yNB0kesKUTOLOgSHOeGyxO1QDCS+NFC7KU
Takeaways
For defenders: If you're seeing password spraying or credential stuffing from residential IPs, the traffic may be routing through infrastructure like Riptide. The proxy ports (81, 4444, 5555, 9191) answering with HTTP 407 Proxy Authentication Required are detectable at the network level. The SSH fingerprints above can identify cloned nodes in other netblocks.
For developers: net/http/pprof is not a production package. If it ships in your binary, ensure the debug HTTP server is bound to localhost or behind authentication. A single exposed pprof endpoint gave us your username, your build path, your source tree, your dependencies, your runtime state, and your connection count. Everything.
For threat intelligence: This infrastructure had zero public coverage before this report. Commercial proxy services operating at this scale -- hundreds of thousands of concurrent connections -- exist in a blind spot between legitimate proxy providers and traditional malware C2. They enable abuse without being the abuse, making them difficult to categorize and easy to overlook.
The operator's OPSEC was otherwise competent. The panel had rate limiting. The proxy ports required authentication. The servers were deployed from golden images. But one forgotten debug endpoint on port 666 was enough to unravel the entire operation.
Sometimes, the devil really is in the details.
This investigation was triggered by a tweet from German Fernandez (@1ZRR4H). Additional infrastructure discovery and the Subway credential stuffing finding were produced by Breakglass Intelligence's autonomous GHOST investigation system. All evidence was captured via passive and semi-passive methods.
Breakglass Intelligence | March 31, 2026