Inside Bucklog SARL: Anatomy of a Commercial Credential-Harvesting Kubernetes Cluster
Published: 2026-03-08 | Author: FGBOT | Investigation Date: 2026-03-07
TL;DR
A 21-node Kubernetes cluster operated by French company Bucklog SARL (AS211590) is running a commercial Credential-Harvesting-as-a-Service operation across the 185.177.72.0/24 range. The cluster's custom monitoring panel tracks revenue in EUR, subscriber counts, and harvested-credential metrics -- confirming this is a monetized criminal enterprise selling stolen secrets to paying customers. The operation is referenced in 50 OTX AlienVault threat-intelligence pulses and classified MALICIOUS by GreyNoise, yet remains fully operational.
The Trigger: 3,171 Requests in 3 Minutes
On March 7, 2026, our honeypot infrastructure lit up. A single IP address -- 185.177.72.23 -- fired 3,171 HTTP requests in approximately 3 minutes, methodically probing 854 unique file paths designed to extract credentials from misconfigured web servers. The paths targeted were not random. They represented a carefully curated dictionary of every sensitive file a modern application might accidentally expose:
/.env
/.env.production
/.env.local
/.aws/credentials
/.terraform/terraform.tfstate
/application-prod.yml
/config/database.yml
/wp-config.php
/config.php
/.git/config
/.docker/config.json
The request rate -- roughly 17 requests per second across hundreds of unique paths -- indicated this was not a script kiddie with a scanner. This was industrial-scale credential harvesting. What we found when we pulled on the thread was far more significant than a single malicious IP.
Mapping the Cluster: 21 Nodes, Purpose-Built Infrastructure
Starting from the triggering IP, we mapped the entire 185.177.72.0/24 network range. WHOIS records identified the operator as Bucklog SARL, a French limited liability company (SARL), operating under AS211590. The ASN was created in March 2025 and the /24 block was allocated in May 2025 -- infrastructure less than a year old, purpose-built for this operation.
Port scanning across the /24 revealed a 21-node Kubernetes cluster, each node running Kubelet on port 10250 with TLS and authentication enabled. Five nodes were tagged by Shodan as "scanner" nodes, confirming their role in the scanning pipeline.
Cluster Topology
The infrastructure is organized into distinct layers:
| Role | IPs | Details |
|---|---|---|
| Gateway/Router | 185.177.72.1 | BGP (:179), SNMP (:161), nginx 1.7.10, port 8181 |
| Storage/NFS | 185.177.72.4 | NFS v3/v4 (:2049), DCImanager agent (:1500), Ubuntu 24.04 |
| Cluster Monitor | 185.177.72.3 | NodePort 30010 (web panel), Wave 3 deployment |
| Scanner Nodes | .13, .22, .38, .49, .51 | Shodan-tagged "scanner", active credential harvesting |
| Worker Nodes | .9, .17, .21, .23, .24, .29, .30, .31, .45, .50, .52-.54, .56, .100 | General compute, parser/API workloads |
| CNI Observability | .21, .29 | Hubble/Cilium relay on port 4244 |
The cluster uses Cilium as its Container Network Interface (CNI), with Hubble relay endpoints exposed on two nodes for network observability -- a choice that signals operational sophistication. The networking stack provides deep visibility into inter-pod traffic, which the operators likely use to monitor scanner performance and troubleshoot data pipeline issues.
Deployment Timeline
Kube-proxy healthz endpoints on port 10256 were unauthenticated across 11 nodes, leaking monotonic uptime counters that allowed us to reconstruct the cluster's deployment history:
| Deployment Wave | Nodes | Approximate Deploy Date | Uptime (days) |
|---|---|---|---|
| Wave 1 (initial scanners) | .49, .51 | ~2025-12-30 | 67 |
| Wave 2 (main cluster) | .13, .17, .22, .24, .29, .31, .38 | ~2026-01-07 to 01-09 | 58-60 |
| Wave 3 (expansion) | .3, .9 | ~2026-01-22 | 44 |
This tells a clear story: the operators bootstrapped two scanner nodes in late December 2025, deployed the main cluster in the first week of January 2026, and added capacity three weeks later. Kubelet TLS certificates corroborate this timeline -- the cert on the investigated node was issued on 2026-01-08 with subject CN=pk19@1767890994.
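Both pieces of the timeline are easy to reproduce from the published numbers alone. The Python sketch below decodes the Unix timestamp embedded in the certificate CN and rolls the leaked uptime counters back from the observation date:

```python
from datetime import datetime, date, timedelta, timezone

# The kubelet certificate subject embeds a Unix timestamp:
# CN=pk19@1767890994. Decoding it (UTC) gives the issuance time.
issued = datetime.fromtimestamp(1767890994, tz=timezone.utc)
print(issued.date())  # 2026-01-08

# Deploy dates, reconstructed by subtracting the uptimes leaked via
# kube-proxy healthz from the observation date (2026-03-07).
observed = date(2026, 3, 7)
for wave, uptime_days in [("Wave 1", 67), ("Wave 2", 59), ("Wave 3", 44)]:
    print(wave, observed - timedelta(days=uptime_days))
# Wave 1 2025-12-30, Wave 2 2026-01-07, Wave 3 2026-01-22
```

The decoded certificate date (2026-01-08) falls squarely inside the Wave 2 window, independently corroborating the uptime-derived timeline.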
The Panel: Reversing a Criminal Dashboard
At 185.177.72.3:30010, we discovered the operation's crown jewel: a custom-built Cluster Monitor web panel. The panel runs a React SPA (Vite build) backed by Express.js, Socket.IO for real-time updates, and MongoDB/Redis databases. It is token-authenticated -- but its JavaScript bundle (679,376 bytes) told us everything about the business.
OPSEC Camouflage: The Fake 502
When an unauthenticated user visits the panel, they do not see a login page. Instead, the panel renders a pixel-perfect fake nginx 502 Bad Gateway page:
// From deobfuscated bundle analysis
document.title = "502 Bad Gateway"
// Renders: font-family Tahoma/Verdana/Arial, white background
// "502 Bad Gateway" in 24px, "nginx/1.24.0" footer
// Indistinguishable from a real nginx error
This is deliberate OPSEC. Casual visitors and automated scanners see what appears to be a dead service and move on. Only operators with valid JWT tokens see the real dashboard. The panel also implements an /api/auth/auto endpoint -- an auto-login backdoor that dispenses tokens without credentials. During our investigation this endpoint returned 502 (the backend was down), but when operational, it would grant access to any visitor.
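Defenders can turn this camouflage into a detection opportunity. A fake 502 rendered by a JavaScript bundle hard-codes its nginx banner, so the version string in the body need not match whatever actually served the response. The heuristic below is ours, not the panel's, and the header/body values are illustrative:

```python
import re

def looks_like_fake_502(body: str, server_header: str) -> bool:
    """Heuristic: a genuine nginx error page is emitted by nginx itself,
    so the version in the body matches the Server header. A client-side
    fake (like this panel's, which hard-codes "nginx/1.24.0") often
    disagrees with the real server banner."""
    body_ver = re.search(r"nginx/([\d.]+)", body)
    header_ver = re.search(r"nginx/([\d.]+)", server_header)
    if not body_ver:
        return False  # body does not even claim to be nginx
    if header_ver is None:
        return True   # body claims nginx, server banner says otherwise
    return body_ver.group(1) != header_ver.group(1)

# Page body claims nginx/1.24.0; the cluster's edge runs nginx 1.7.10.
print(looks_like_fake_502("<h1>502 Bad Gateway</h1>nginx/1.24.0",
                          "nginx/1.7.10"))  # True
```

This will not catch every decoy (an operator can align the banners), but it is a cheap first-pass filter when triaging suspected camouflage pages at scale.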
The Business Metrics: Revenue, Subscribers, and "Cracks"
The JavaScript bundle reveals the dashboard's real-time data model, received via Socket.IO metrics:update events. This is not a security research tool or a misconfigured scanner. The tracked metrics confirm a commercial criminal enterprise:
{
  cluster: {
    serviceApi: {
      rps: number,            // Requests Per Second
      pps: number,            // Packets Per Second
      rpsPerCrack: number,    // Efficiency per active crack session
      ppsPerCrack: number,    // Packets per crack session
      activeCracks: number,   // Running credential extraction jobs
      users: number,          // Active subscribers
      hits: {
        total: number,        // All-time harvested credentials
        day: number,
        week: number,
        month: number
      },
      hitsSubscribed: {       // Credentials accessed by PAYING users
        total: number,
        day: number,
        week: number,
        month: number
      },
      revenue: {
        monthly: number,      // "CA Mensuel" -- Monthly Revenue (EUR)
        total: number         // "CA Total" -- Cumulative Revenue (EUR)
      }
    }
  }
}
The French-language labels are consistent with French operators: "CA Mensuel" (Chiffre d'Affaires Mensuel -- monthly revenue), "Mois" (month), "Semaine" (week), "Jour" (day). The panel even has a TV mode (/tv route) designed for wall-mounted displays in a physical operations center -- complete with color-coded threshold alerting (CRIT/WARN/LOW/NORMAL/HIGH/SURGE) for key performance indicators.
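The exact cut-offs behind the six alert bands are not recoverable from the minified bundle, but the classification shape is simple. The sketch below is our illustrative reconstruction; the ratios are invented for demonstration, only the band names come from the panel:

```python
def band(value: float, baseline: float) -> str:
    """Map a KPI to one of the panel's six alert bands
    (CRIT/WARN/LOW/NORMAL/HIGH/SURGE). Thresholds are hypothetical --
    the real cut-offs were not recovered from the bundle."""
    ratio = value / baseline
    if ratio < 0.25:
        return "CRIT"    # pipeline effectively down
    if ratio < 0.50:
        return "WARN"    # well below expected throughput
    if ratio < 0.85:
        return "LOW"
    if ratio < 1.25:
        return "NORMAL"
    if ratio < 2.00:
        return "HIGH"
    return "SURGE"       # anomalous spike, e.g. a large fresh target set

print(band(120, 100))  # NORMAL
```

Whatever the real thresholds are, the presence of a wall-display mode with this kind of banding is what one would expect from a NOC -- applied here to a criminal revenue pipeline.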
Pod Architecture: Scanner, Parser, API
The bundle's pod classification logic reveals the operation's pipeline:
function classifyPod(name, namespace) {
  if (name.startsWith("scanner-")) return "scanner";   // Credential harvesters
  if (name.startsWith("parser-")) return "parser";     // Data processing
  if (name.startsWith("api-") || namespace.startsWith("api-") ||
      name.startsWith("frontend-")) return "api";      // Service layer
  return "other";
}
This is a classic three-tier data pipeline:
- Scanner pods (indigo #818cf8) -- Execute the credential-harvesting scans across the internet
- Parser pods (green #4ade80) -- Process and normalize stolen credential data
- API pods (blue #60a5fa) -- Serve harvested credentials to subscribers via authenticated API
The panel tracks per-category pod health including restart counts, enabling operators to quickly identify and remediate failures in any tier of the pipeline.
Confirmed API Endpoints
Probing the panel's backend revealed authenticated API routes that further confirm the operation's nature:
| Endpoint | Purpose |
|---|---|
/api/targets | Target management (what to scan) |
/api/campaigns | Campaign management (scan operations) |
/api/victims | Victim data (compromised hosts) |
/api/credentials | Harvested credentials database |
/api/scans | Scanner operations control |
/api/nodes | Cluster node management |
/api/pods | Pod management |
/api/stats | Operational statistics |
All endpoints return {"error":"Missing token"} without authentication and {"error":"Invalid token"} with invalid JWTs, confirming active token validation. Login is rate-limited to 5 attempts per 15 minutes.
Supporting Infrastructure
DCImanager: Bare-Metal Provisioning
The storage node at 185.177.72.4 runs an ISPsystem DCImanager agent (Werkzeug/2.0.3, Python 3.8.17) on port 1500, with an NFS export containing OS provisioning templates:
EXPORT: /opt/ispsystem/dci/os_templates -> localhost
A related DCImanager web interface was identified at dci.soka.cloud (185.142.55.150), exposing an Angular SPA with the ISPsystem v4 auth microservice. This is the tool the operators use to provision bare-metal servers for their Kubernetes cluster -- further evidence of dedicated, purpose-built infrastructure rather than compromised hosts.
Observium: Network Monitoring
An Observium instance at observium.soka.cloud (185.142.55.148) provides network monitoring for the operation. The login page is live behind Apache/2.4.58, with default credentials rejected and all API endpoints session-gated.
Gateway Security
The border gateway at 185.177.72.1 speaks BGP (port 179), confirming AS211590 is a live autonomous system with active peering. The router runs an end-of-life nginx 1.7.10 with 11 known CVEs -- a rare OPSEC lapse in an otherwise well-hardened operation.
The Business Model: Credential-Harvesting-as-a-Service
The evidence paints a clear picture of the revenue model:
- Harvesting: Scanner pods continuously sweep the internet for exposed credential files (.env, .aws/credentials, terraform state, Spring Boot configs, SMTP credentials, SendGrid keys, database connection strings)
- Processing: Parser pods normalize and deduplicate stolen credentials, extracting structured key-value pairs from various file formats
- Monetization: Subscribers access harvested credentials via the API layer, tracked as "Hits Subscribed" versus total "Hits"
- Revenue: Monthly and cumulative revenue tracked in EUR ("CA Mensuel" / "CA Total")
- Performance: Operators monitor per-crack efficiency (RPS/Crack, PPS/Crack) to optimize scanning throughput
The term "Cracks" in the dashboard metrics suggests the operation extends beyond passive credential harvesting into active credential testing -- verifying that stolen keys and passwords are valid against target services. This elevates the operation from data theft to active unauthorized access.
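To illustrate why exposed `.env` files are such low-value-effort, high-value targets: turning one into structured secrets takes a handful of lines. The parser below is our own minimal illustration of the normalization step the parser tier presumably performs -- the operation's actual parser code was not recovered:

```python
def parse_dotenv(text: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines, '#' comments and blank
    lines ignored, surrounding quotes stripped. Roughly the kind of
    normalization a credential-harvesting parser tier would apply."""
    secrets = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        secrets[key.strip()] = value.strip().strip("\"'")
    return secrets

sample = 'DB_PASSWORD="hunter2"\n# deploy config\nAWS_ACCESS_KEY_ID=AKIAEXAMPLE\n'
print(parse_dotenv(sample))
# {'DB_PASSWORD': 'hunter2', 'AWS_ACCESS_KEY_ID': 'AKIAEXAMPLE'}
```

The same few lines work defensively: the ease of this parsing is exactly why a single HTTP 200 on `/.env` should be treated as a full compromise of every secret in the file.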
MITRE ATT&CK Mapping
| Tactic | Technique | ID | Application |
|---|---|---|---|
| Reconnaissance | Active Scanning: Wordlist Scanning | T1595.003 | Mass-scanning for exposed credential files across 854 paths |
| Resource Development | Acquire Infrastructure: Server | T1583.004 | Dedicated /24 block under AS211590, bare-metal 21-node K8s cluster |
| Resource Development | Establish Accounts: Email Accounts | T1585.002 | Proton Mail abuse contact for anonymity |
| Initial Access | Exploit Public-Facing Application | T1190 | Accessing exposed .env, .aws/credentials, terraform.tfstate |
| Credential Access | Unsecured Credentials: Credentials in Files | T1552.001 | Primary objective -- harvesting credentials from config files |
| Credential Access | Brute Force | T1110 | "Active Cracks" metric suggests credential testing |
| Collection | Automated Collection | T1119 | Parallelized, K8s-orchestrated credential harvesting at scale |
| Exfiltration | Automated Exfiltration | T1020 | Scanner-to-parser-to-API data pipeline |
| Command and Control | Web Service | T1102 | API-based credential delivery to subscribers |
Operator Attribution
Bucklog SARL
| Field | Value |
|---|---|
| Legal Entity | Bucklog SARL (French limited liability company) |
| ASN | AS211590 (BUCKLOG) |
| Network Range | 185.177.72.0/24 |
| Sponsoring ISP | FBW Networks SAS (LIR sponsor) |
| Registered Address | 16 rue Grange Dame Rose, 78140 Velizy-Villacoublay, France |
| Admin Contact | Roget Cabot, Le Rove, France |
| Tech Contact | Gautier MARSOT LEMAIRE, Velizy-Villacoublay, France |
| Abuse Contact | abuse-bucklog@proton.me |
| ASN Created | 2025-03-13 |
| IP Allocation | 2025-05-27 |
The use of Proton Mail for the RIPE abuse contact is a red flag -- legitimate hosting companies typically use corporate email. The entity is less than a year old, and the infrastructure was purpose-built. The "soka.cloud" domain used for DCImanager and Observium provides an additional thread for investigation.
Threat Intelligence Context
This operation is not flying under the radar. The threat intelligence community has been watching:
- OTX AlienVault: 50 threat intelligence pulses reference IPs in this range -- an exceptionally high count
- GreyNoise: Classified as MALICIOUS
- Shodan: Tags include "devops" (all cluster nodes) and "scanner" (5 nodes)
- Honeypot sensors: Observed by T-Pot (Sydney), LCIA HoneyNet, Cowrie, Tanner, Dionaea, Honeytrap, Suricata, SentryPeer, Mailoney, p0f, and fatt
- Passive DNS: No domain names associated -- the operation uses IP addresses exclusively, reducing its exposure footprint
Despite this broad visibility across the threat intelligence ecosystem, the infrastructure remains fully operational with 15+ days of continuous uptime on the monitoring panel and 44-67 days on cluster nodes.
Indicators of Compromise
Network IOCs
| IOC | Type | Description |
|---|---|---|
185.177.72.0/24 | CIDR | Full cluster range (AS211590 BUCKLOG) |
185.177.72.1 | IP | Gateway/router (BGP, SNMP, nginx 1.7.10) |
185.177.72.3 | IP | Cluster Monitor panel host (NodePort 30010) |
185.177.72.4 | IP | Storage/NFS server, DCImanager agent |
185.177.72.13 | IP | Confirmed scanner node |
185.177.72.22 | IP | Confirmed scanner node |
185.177.72.23 | IP | Scanner node (triggered this investigation) |
185.177.72.38 | IP | Confirmed scanner node |
185.177.72.49 | IP | Confirmed scanner node (Wave 1) |
185.177.72.51 | IP | Confirmed scanner node (Wave 1) |
AS211590 | ASN | Bucklog SARL autonomous system |
SSH Host Key Fingerprints (185.177.72.23)
| Algorithm | Fingerprint |
|---|---|
| RSA | SHA256:RTUtZFkAWLElTvUyeirh2cuyTxj7LVELM/Wrx8u8N8M |
| ECDSA | SHA256:Y9UpPK17A0SHOk+d+dKr+Ybonjp1j+nwL3g9502ljtA |
| ED25519 | SHA256:N1XZPiNfQrrpr0v0Wz+2FfVIcXY1qRgZTPWAeMeTEKY |
TLS Certificate (Kubelet)
| Field | Value |
|---|---|
| Subject | CN=pk19@1767890994 |
| Issuer | CN=pk19-ca@1767890994 |
| SAN | DNS:pk19 |
| Validity | 2026-01-08 to 2027-01-08 |
Related Infrastructure
| IOC | Type | Description |
|---|---|---|
dci.soka.cloud | Domain | DCImanager bare-metal provisioning panel |
185.142.55.150 | IP | DCImanager server |
observium.soka.cloud | Domain | Observium network monitoring |
185.142.55.148 | IP | Observium server |
abuse-bucklog@proton.me | Email | RIPE-registered abuse contact
Targeted File Paths (Partial List)
/.env
/.env.production
/.env.local
/.env.backup
/.aws/credentials
/.terraform/terraform.tfstate
/application-prod.yml
/application.yml
/config/database.yml
/wp-config.php
/config.php
/.git/config
/.docker/config.json
/.npmrc
/.pypirc
/sendgrid.env
Defensive Recommendations
Immediate Actions
- Block the entire 185.177.72.0/24 range at your network perimeter. All 21 nodes in this cluster serve a single malicious purpose. Also consider blocking the related infrastructure at 185.142.55.148 and 185.142.55.150.
- Monitor AS211590 for expansion. The ASN is less than a year old and may acquire additional IP space. Set up alerts for new RIPE allocations under AS211590 or Bucklog SARL.
- Audit your web server configurations. Ensure that sensitive files are not accessible from the internet:
  - `.env` files must never be served by your web server. Add explicit deny rules in nginx/Apache.
  - `.aws/`, `.terraform/`, `.git/`, and `.docker/` directories must be blocked at the web server level.
  - Spring Boot `application-*.yml` files should not be in the web root.
- Check your access logs for requests from 185.177.72.0/24. If any requests returned HTTP 200 for credential files, assume those credentials are compromised and rotate them immediately.
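As a sketch of that audit, assuming the standard nginx/Apache "combined" log format (adjust the regex and the path list to your own environment and the full IOC list below):

```python
import re

# Partial list of sensitive paths; extend with the full targeted-path IOCs.
SENSITIVE = ("/.env", "/.aws/credentials", "/.git/config",
             "/wp-config.php", "/.terraform/terraform.tfstate")

# Matches the common "combined" log format: IP, timestamp, request, status.
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)[^"]*" (\d{3})')

def compromised_paths(log_lines):
    """Yield (ip, path) for requests from 185.177.72.0/24 that
    successfully retrieved a sensitive file (HTTP 200)."""
    for line in log_lines:
        m = LINE.match(line)
        if not m:
            continue
        ip, path, status = m.groups()
        if ip.startswith("185.177.72.") and status == "200" \
                and path.startswith(SENSITIVE):
            yield ip, path

log = ['185.177.72.23 - - [07/Mar/2026:11:02:14 +0000] "GET /.env HTTP/1.1" 200 412 "-" "-"',
       '185.177.72.23 - - [07/Mar/2026:11:02:15 +0000] "GET /config.php HTTP/1.1" 404 0 "-" "-"']
print(list(compromised_paths(log)))  # [('185.177.72.23', '/.env')]
```

Any hit from this audit means the file's entire contents -- database passwords, API keys, SMTP credentials -- should be treated as already sold.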
Hardening Measures
- Web server deny rules -- Add blanket rules to block access to dotfiles and sensitive paths:
# nginx
location ~ /\. {
    deny all;
    return 404;
}
location ~* (\.env|\.aws|\.terraform|terraform\.tfstate|application-prod\.yml) {
    deny all;
    return 404;
}

# Apache
<FilesMatch "^\.">
    Require all denied
</FilesMatch>
<DirectoryMatch "/\.(env|aws|terraform|git|docker)">
    Require all denied
</DirectoryMatch>
- Deploy credential scanning in CI/CD to detect secrets committed to repositories before they reach production servers. Tools like truffleHog, git-secrets, or Gitleaks can catch this in pre-commit hooks.
- Use secrets management services (AWS Secrets Manager, HashiCorp Vault, Azure Key Vault) instead of file-based credential storage. Environment variables loaded from `.env` files are a persistent liability.
- Implement WAF rules that detect and block mass-scanning patterns: hundreds of 404s in rapid succession from a single IP is a clear signal.
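That burst signature is simple to operationalize. A minimal sliding-window detector, with illustrative thresholds that you would tune to your own traffic, might look like:

```python
from collections import defaultdict, deque

class ScanDetector:
    """Flags an IP once it produces `threshold` 404s within `window`
    seconds -- the signature of path-dictionary scanners like the one
    observed here (3,171 requests in roughly 3 minutes)."""

    def __init__(self, threshold: int = 50, window: float = 60.0):
        self.threshold = threshold
        self.window = window
        self.events = defaultdict(deque)  # ip -> timestamps of recent 404s

    def record(self, ip: str, status: int, ts: float) -> bool:
        """Record one response; return True if the IP should be blocked."""
        if status != 404:
            return False
        q = self.events[ip]
        q.append(ts)
        while q and ts - q[0] > self.window:  # drop events outside window
            q.popleft()
        return len(q) >= self.threshold

det = ScanDetector(threshold=5, window=60)
hits = [det.record("185.177.72.23", 404, t) for t in range(10)]
print(hits.index(True))  # the fifth 404 trips the detector (index 4)
```

In production this logic typically lives in the WAF or a fail2ban-style log watcher rather than in the application, but the windowed-counter approach is the same.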
Reporting
- Report to CERT-FR (cert-fr@ssi.gouv.fr). This is a commercial criminal operation running on French infrastructure, operated by a French registered company. French law (Articles 323-1 through 323-3 of the Code pénal) covers unauthorized access to computer systems and theft of data.
- Report to FBW Networks SAS, the sponsoring ISP, though note that the abuse contact is a Proton Mail address rather than a corporate one.
Conclusion
Bucklog SARL's credential-harvesting cluster represents a troubling evolution in the cybercrime ecosystem: a professionally engineered, Kubernetes-orchestrated, subscription-based service for mass credential theft. The operators have invested in serious infrastructure -- a dedicated ASN, a /24 IP block, a 21-node Kubernetes cluster with Cilium networking, DCImanager for bare-metal provisioning, Observium for network monitoring, and a custom React/Socket.IO dashboard with revenue tracking and a TV mode for their operations center.
This is not a lone actor running a scanner script. This is a business. The dashboard tracks monthly revenue in euros. It counts subscribers. It measures per-crack efficiency. It monitors pod health across a multi-tier data pipeline that harvests, processes, and serves stolen credentials to paying customers.
The operation's OPSEC is noteworthy -- Kubelet authentication enabled, fake 502 pages to camouflage the panel, Proton Mail for abuse contacts, no associated domains on the scanning infrastructure. Yet they left enough exposed -- unauthenticated kube-proxy healthz endpoints, a 679KB JavaScript bundle that reveals their entire data model, and an NFS portmapper that leaks their provisioning stack -- to allow comprehensive mapping of the operation from a single honeypot alert.
With 50 OTX AlienVault pulses and a GreyNoise MALICIOUS classification, the threat intelligence community is well aware of this cluster. The infrastructure remains fully operational. The gap between detection and disruption for operations like this remains one of the most pressing challenges in the threat landscape.
This investigation was conducted using passive reconnaissance and honeypot data only. No unauthorized access to Bucklog SARL systems was performed. All findings are based on publicly accessible information (open ports, WHOIS/RIPE records, Shodan, OTX AlienVault, GreyNoise) and analysis of data sent to our own infrastructure.
Breakglass Intelligence -- Automated OSINT by FGBOT