Embracing Vibe Coding at Structify

At Structify, we’re always exploring the frontier of how software gets built. One of the most interesting (and occasionally chaotic) evolutions we’ve embraced recently is what we’ve come to call vibe coding.
What is Vibe Coding?
Vibe coding refers to the use of LLM-powered coding tools (like Cursor) in a semi-autonomous loop to rapidly generate large amounts of code. It’s not just prompt-and-pray — it’s about using AI coding agents to accelerate iteration.
It’s fast. It’s weirdly effective. It’s also occasionally terrifying.
When Is Vibe Coding Acceptable?
We’ve found that vibe coding works best when paired with some clear internal guardrails. At Structify, we’ve adopted the following rules of thumb when it comes to accepting a vibe coded pull request (PR):
- External Verification Required
Any LLM-assisted code must have a clear way to validate that it works — whether that’s through integration tests, visual inspection, or end-to-end interaction. If we can’t verify it easily, we don’t ship it.
- Avoid Core Algorithms and Infrastructure
Vibe coding is not for foundational components. If others are going to build on top of a piece of code, it needs to be deeply understood, deliberate, and handcrafted by an engineer — with LLMs in an assistant role only.
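To make the first guardrail concrete, here is a minimal sketch of what "external verification" can look like in practice. The helper function and its behavior are hypothetical, not real Structify code; the point is that the gate is a black-box check on known inputs and outputs, not a line-by-line read of the generated code.

```python
# Hypothetical vibe-coded helper: totals and counts per customer
# from (customer, amount) rows. Illustrative only.
def summarize_orders(rows):
    totals = {}
    for customer, amount in rows:
        total, count = totals.get(customer, (0.0, 0))
        totals[customer] = (total + amount, count + 1)
    return totals

def test_summarize_orders_end_to_end():
    # Verify behavior from the outside: feed a known input and
    # assert the exact expected output, without trusting the internals.
    rows = [("acme", 10.0), ("acme", 5.0), ("globex", 7.5)]
    assert summarize_orders(rows) == {"acme": (15.0, 2), "globex": (7.5, 1)}

test_summarize_orders_end_to_end()
```

If a check this direct doesn't exist, the first rule says the PR doesn't merge.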
This approach has allowed us to spin up and iterate on smaller product lines quickly. However, it's not without tradeoffs. Some of our largest bugs have come from overly ambitious AI-generated code. These PRs often balloon to 10,000+ lines, shifting the burden from the original writer to the reviewer, who’s left to make sense of massive auto-generated changes.
So while the pace can be exciting, our policy has evolved to emphasize responsible use: if it’s a quick internal tool or visual feature and the output can be easily verified by "just using it" — go ahead and vibe code it. If it’s infrastructure, algorithms, or shared dependencies — do the hard thing and write it yourself (with help if needed).
Rethinking Technical Interviews: Code Review Over Code Writing
One positive side effect of adopting AI in our workflows is that it’s changed how we think about engineering interviews.
We’re no longer big fans of traditional “code-on-a-whiteboard” interviews or time-limited LeetCode-style sessions. Instead, we’re implementing code review interviews — where candidates review real code, identify issues, suggest improvements, and discuss tradeoffs.
Why? Because in the real world, most engineers don’t start from scratch anymore. They work with existing codebases, rely on AI assistance, and spend a significant amount of time reviewing and reasoning about code — not just writing it.
Even better, we allow candidates to use tools like ChatGPT and Cursor during the process, just like they would on the job. It’s a far more accurate representation of how engineers actually work today, and it lets candidates show off their critical thinking, not just their typing speed.
That's where the future is heading, so why fight it? We embrace it.
-Alex Reichenbach & Ronak Gandhi, co-founders