← Back to archive
Vibe-coding @xPatrick096

FeuerwehrHub

AI likelihood 88 %
Published April 11, 2026
Repo created March 22, 2026
Tech stack
  • Rust
  • Axum
  • Vanilla JS
  • SQLite
  • Docker

Assessment: Vibe-coding?

Yes. With high likelihood the project is largely AI-generated; the signals are numerous and unambiguous.

The indicators

Speed vs. scope. The repo has existed since 22 March 2026 — 20 days. In that window 30,000 lines of code appeared: a Rust/Axum backend, a vanilla-JS frontend, 43 DB migrations, Docker setup, CI/CD, PDF generation, audit logging, a role/permission system, TOTP 2FA, vehicle management, operation reports, club administration, time tracking, a calendar with iCal export. For a solo developer in 20 days, that’s simply superhuman — especially in Rust, which isn’t a fast language to develop in.

Commit pattern. The commit log shows a textbook “prompt → paste → push” rhythm: many commits per day (18 on 28 March, 16 on 2 April, 15 on 4 April), almost all with throw-away messages like Update verein.rs, update, v1.2.1. Not a single descriptive feature commit along the lines of “Add qualification expiry warnings” or “Implement role-based module access”. That’s consistent with someone taking AI output and committing it directly.
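For illustration, here is a minimal sketch (our own, not from the repo) of the heuristic behind that judgment: a commit message counts as "throwaway" if it is a bare filler word, an "Update &lt;file&gt;" with no further description, or a plain version bump. The example messages are the ones quoted above.

```rust
/// Heuristic: does a commit message look like a "prompt → paste → push"
/// throwaway? Patterns mirror the messages seen in the log
/// ("Update verein.rs", "update", "v1.2.1").
fn is_throwaway(msg: &str) -> bool {
    let m = msg.trim();
    let lower = m.to_lowercase();
    // bare filler messages
    if matches!(lower.as_str(), "update" | "fix" | "wip" | "changes") {
        return true;
    }
    // "Update <file>" with no description beyond the filename
    if lower.starts_with("update ") && m.split_whitespace().count() == 2 {
        return true;
    }
    // bare version bumps like "v1.2.1"
    if m.len() > 1
        && m.starts_with('v')
        && m[1..].chars().all(|c| c.is_ascii_digit() || c == '.')
    {
        return true;
    }
    false
}

fn main() {
    for msg in ["Update verein.rs", "v1.2.1", "Add qualification expiry warnings"] {
        println!("{msg:?} -> throwaway: {}", is_throwaway(msg));
    }
}
```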

Versioning chaos. Versions jump around wildly: v0.0.1 → v1.0.0.5 → v1.0.06.1 → v1.0.0.7.1 → v1.0.0.7.2 → then suddenly v1.1.07 (no dot) → v1.1.1, and within two days from v1.1.x to v1.4.8. No one planned a scheme here; increments happen just to trigger new Docker images.

Framework switch mid-project. On 4 April a commit reads: “Topnav reimplemented in vanilla JS (sidebar removed)” and “Dropdown state as writable store — guaranteed reactive in Svelte 5”. The frontend was migrated from Svelte to vanilla JS mid-flight, with no trace of Svelte in the final code. Classic: AI generates in one framework, it doesn’t work, you have it regenerated in another.

Structural uniformity. Every single file follows exactly the same pattern: // ── Section Name ──────── separator comments (including perfectly aligned Unicode dashes), identical CRUD scaffolding, identical error-handling structure. That isn’t “good style” — that’s an AI reproducing a learned template. A human developer would have built an abstraction after the 10th CRUD handler, or at least a macro. Here, 76 handlers in verein.rs (2,461 lines) are spelled out one by one.
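The macro the paragraph alludes to could look like the following. This is a hedged sketch of the deduplication that is missing, with hypothetical entities and an in-memory Vec standing in for the real SQLite layer; one macro_rules! template replaces hand-spelled per-entity scaffolding.

```rust
// Hypothetical entities; the real project has Vehicle, Member,
// OperationReport, etc. spelled out handler by handler.
#[derive(Debug, Clone, PartialEq)]
struct Vehicle { id: u32, name: String }

#[derive(Debug, Clone, PartialEq)]
struct Member { id: u32, name: String }

// One template instead of 76 hand-written CRUD blocks.
macro_rules! crud_store {
    ($store:ident, $entity:ty) => {
        #[derive(Default)]
        struct $store { items: Vec<$entity> }

        impl $store {
            fn create(&mut self, item: $entity) { self.items.push(item); }
            fn read(&self, id: u32) -> Option<&$entity> {
                self.items.iter().find(|i| i.id == id)
            }
            fn delete(&mut self, id: u32) { self.items.retain(|i| i.id != id); }
        }
    };
}

crud_store!(VehicleStore, Vehicle);
crud_store!(MemberStore, Member);

fn main() {
    let mut vehicles = VehicleStore::default();
    vehicles.create(Vehicle { id: 1, name: "LF 20".into() });
    println!("{:?}", vehicles.read(1));
}
```

The point is not this exact design but the reflex: after the tenth identical handler, a human reaches for an abstraction; the repo never does.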

Zero TODOs, zero FIXMEs, zero console.logs. In 30,000 lines there’s not a single TODO, no FIXME, not one forgotten console.log. In genuine iterative development, that’s essentially impossible. AI-generated code comes out “finished”, without the traces of human work.
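A sketch of the scan behind this claim (our own illustration): count occurrences of the usual work-in-progress markers across source text. Against this repo the count is zero.

```rust
/// Count lines containing typical traces of human iteration:
/// TODO, FIXME, and forgotten console.log calls.
fn count_markers(source: &str) -> usize {
    const MARKERS: [&str; 3] = ["TODO", "FIXME", "console.log"];
    source
        .lines()
        .map(|line| MARKERS.iter().filter(|m| line.contains(*m)).count())
        .sum()
}

fn main() {
    // Hypothetical snippets, for contrast
    let human_style = "fn f() {}\n// TODO: handle errors\nconsole.log('x');";
    let generated_style = "fn f() {}\nfn g() {}";
    println!("human: {}", count_markers(human_style));
    println!("generated: {}", count_markers(generated_style));
}
```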

Zero commented-out code. Human developers leave commented-out code around — experiments, alternatives, dead paths. Here: nothing. Every file reads as if cast in one piece, because it was.

Unnatural breadth, missing depth. The project has an impressive feature set (vehicles, operation reports, club management, time tracking, inventory, calendar, 2FA, audit log). But depth is absent: no input validation beyond “not empty”, no rate limits on sensitive endpoints except login, no JWT revocation, TOTP secrets stored in plaintext, unauthenticated endpoints. An experienced developer would build fewer modules but lock them down. AI is happy to generate new features; it doesn’t mind the edges.
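To make the gap concrete, here is a hedged sketch of the stricter validation an experienced developer would add. The "not empty" check is reportedly the only one present; the length bound and character whitelist are our additions, and the field name is hypothetical.

```rust
/// Sketch of input validation with actual depth. Only the first
/// check reflects what the repo does; the rest is what's missing.
fn validate_callsign(input: &str) -> Result<&str, &'static str> {
    if input.trim().is_empty() {
        return Err("must not be empty"); // the repo stops here
    }
    if input.len() > 32 {
        return Err("too long"); // bound input size against abuse
    }
    if !input.chars().all(|c| c.is_ascii_alphanumeric() || c == '-' || c == ' ') {
        return Err("invalid characters"); // reject markup and injection payloads
    }
    Ok(input)
}

fn main() {
    println!("{:?}", validate_callsign("Florian Musterstadt 1-46-1"));
    println!("{:?}", validate_callsign("<script>alert(1)</script>"));
}
```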

Naming and style consistency across language boundaries. The Rust code has German section comments (“Hilfsfunktionen”, “Haupt-Update-Routine”), German error messages, German variable comments — but English function names and struct fields. This specific mix (“German domain, English code”) typically emerges when you prompt in German and the AI generates English code structures with German strings.
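A hypothetical reconstruction of that mix (not taken from the repo, English glosses in parentheses): English identifiers and types, German comments and user-facing strings.

```rust
// ── Hilfsfunktionen ──────────── ("helper functions")

// English struct and field names, as the article describes
struct OperationReport {
    id: u32,
    description: String,
}

fn validate_report(report: &OperationReport) -> Result<(), String> {
    // Prüfe, ob eine Beschreibung vorhanden ist ("check that a description exists")
    if report.description.trim().is_empty() {
        // German error string inside English code structure
        return Err("Beschreibung darf nicht leer sein".to_string());
    }
    Ok(())
}

fn main() {
    let report = OperationReport { id: 1, description: String::new() };
    println!("{:?}", validate_report(&report));
}
```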

What speaks against pure vibe-coding

The developer clearly has technical fundamentals: the Docker setup works, the Proxmox deployment is thoughtful, a CI/CD pipeline exists, and a security PR was merged. The esc() helper and security headers suggest someone at least asks the right questions. bcrypt hashing and account lockout are implemented correctly.
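For reference, the lockout logic credited here reduces to something like the following sketch. The threshold and in-memory shape are our assumptions; the real implementation presumably persists state and pairs this with bcrypt password hashing.

```rust
use std::collections::HashMap;

/// Sketch of account lockout: lock after N consecutive failures,
/// reset on success. MAX_ATTEMPTS is an assumed value.
const MAX_ATTEMPTS: u32 = 5;

#[derive(Default)]
struct LoginGuard {
    failed: HashMap<String, u32>,
}

impl LoginGuard {
    fn is_locked(&self, user: &str) -> bool {
        self.failed.get(user).copied().unwrap_or(0) >= MAX_ATTEMPTS
    }
    fn record_failure(&mut self, user: &str) {
        *self.failed.entry(user.to_string()).or_insert(0) += 1;
    }
    fn record_success(&mut self, user: &str) {
        self.failed.remove(user); // reset the counter on successful login
    }
}

fn main() {
    let mut guard = LoginGuard::default();
    for _ in 0..5 {
        guard.record_failure("patrick");
    }
    println!("locked: {}", guard.is_locked("patrick"));
}
```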

Conclusion

With very high likelihood the project is 80–90 % AI-generated, with a developer setting direction, prompting, and stitching the results together. The human behind it understands infrastructure (Docker, CI/CD, deployment) better than application security. This isn’t “copy-paste from ChatGPT without thinking” — but it isn’t hand-written code either. It’s what you typically see when a tech-savvy non-fullstack developer pushes an ambitious project over the line with AI assistance: impressively broad, but with the blind spots that appear when you don’t review every generated line critically.
