Not Every App Needs the Same Checks: How to Know What Yours Needs
A landing page and a payments app don't need the same security review. Here's how to know what your vibe-coded app actually needs before you ship it.

A blog-template Lovable app and a fintech app on Stripe should not get the same security review. They almost always do. That is why most security reports for vibe-coded apps look the same: 60 pages of generic findings, no clear answer to "what do I actually fix first."
This is the foundation post for everything we publish at NEKOD. If you read one piece of ours, read this one. Because the question every builder is asking is not "is my code secure?" The question is "given what my app does, is it doing the right things the right way?"
Those are different questions. The first one has 500 generic answers. The second one has five answers that actually matter for you.
Why "scan everything" is the wrong starting point
Most security tooling came out of large engineering organisations where the same app stack runs at scale across hundreds of services. In that world, running the same 500 checks against every service makes sense. The cost of a false positive is low. The cost of missing one bug is huge.
Vibe-coded apps are not that. A solopreneur on Lovable has built one app. They are not running a fleet. They have a finite amount of time, a finite amount of money, and a single product they actually need to ship. The question is not "are there 200 issues in this code." There always are. The question is "which of those 200 issues are blockers for THIS app, given what it does."
When a generic scanner returns 200 findings on an app that has no users and no database, 195 of those findings are noise. When the same scanner returns 200 findings on a fintech app handling Stripe webhooks and PII, three of them are critical and the other 197 are still noise. The raw output is the same in both cases. What changes is which findings count.
That is what context-driven risk means. Same scanner, two completely different answers, because the app underneath is doing two different jobs.
The three dimensions of context
When NEKOD assesses an app, we look at three things first, before any code-level checks run:
What does the app do? A static marketing site, a CRUD internal tool, a consumer SaaS with auth and payments, a B2B integration with someone else's API. The functional category sets the floor for what matters. Auth flows matter for anything with user accounts. Webhook signature verification matters for anything taking money. Privacy policies matter for anything storing personal data. None of those matter for a static landing page.
Who uses it? Only you, ten people in your company, an open consumer audience, regulated B2B customers. The audience determines the regulatory exposure. A tool used by ten internal employees in the same country has a different compliance posture than a public SaaS with EU users. GDPR Article 25 is not optional for the second one. It may not apply at all to the first one.
What data does it touch? No data, only your own data, anonymous user data, identifiable user data, payment data, health data. Sensitivity sets the priority order. A leaked email address is a privacy issue. A leaked password is a notification event under GDPR. A leaked card number is a PCI incident with a clock attached. The data class decides which finding moves to the top of the list.
Three dimensions, multiplied together, give you the risk profile. Not the score. The profile. Because the right answer is not a number, it is "here are the five things that actually matter for your app."
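To make "multiplied together" concrete, here is a deliberately toy sketch of a context-driven triage function. Every category name and rule below is illustrative, invented for this example; it is not NEKOD's actual model:

```python
def must_fix_list(app_type: str, audience: str, data_class: str) -> list[str]:
    """Toy sketch: turn the three context dimensions into a priority list.

    All categories and rules are illustrative, not a real assessment model.
    """
    if app_type == "static-site":
        # No accounts, no database: only build-time secrets matter.
        return ["credential hygiene"]
    checks = ["credential hygiene", "authentication"]
    if data_class in ("identifiable", "payment", "health"):
        checks += ["row level security", "privacy policy / consent flow"]
    if data_class == "payment":
        checks += ["webhook signature verification", "session handling"]
    if audience == "public":
        checks += ["regulatory exposure review"]
    return checks

# Same function, three different answers:
print(must_fix_list("static-site", "public", "none"))
print(must_fix_list("internal-tool", "team", "identifiable"))
print(must_fix_list("saas", "public", "payment"))
```

The point is not the specific rules. The point is that the inputs are the app's context, not its code, and the output is a different list for each app.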
Three apps, three completely different lists
Same builder, same toolchain (Lovable plus Supabase plus Stripe), three apps. Here is what context-driven looks like in practice.
App one is a personal portfolio site. No accounts, no database, no payments. The "must fix" list has one item: don't paste API keys into the AI chat history, because chat history can leak (see Lovable's April 2026 breach for the cleanest example of why). Everything else is optional polish.
App two is an internal expense-tracking tool used by twelve employees at the same company. Now we care about authentication (so external strangers can't log in), basic Row Level Security on Supabase (so employees can't see each other's expenses unless that's the design), and credential hygiene (so a developer's hard-coded test key doesn't end up in prod). Five items on the must-fix list. The remaining generic findings, for example about missing security headers on a public CDN, are still noise.
App three is a consumer SaaS with paying users, Stripe checkout, and email-based auth. Now everything escalates. RLS is no longer "nice to have," it is a legal exposure. Webhook signature verification is mandatory. Password storage and session handling have to be right. A privacy policy and a working consent flow are launch blockers under GDPR. The must-fix list is fifteen items. And, importantly, a different fifteen items than the internal tool's list, because the external attack surface and the regulatory exposure are different.
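"Webhook signature verification" is concrete enough to sketch. In production you would lean on Stripe's official SDK (`stripe.Webhook.construct_event` does this for you); the stdlib-only sketch below shows the scheme that check is looking for, so you can see it is a few lines, not a project:

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str,
                            secret: str, tolerance: int = 300) -> bool:
    """Check a Stripe-Signature header against the raw request body.

    Sketch of the scheme the official SDK implements for you;
    prefer stripe.Webhook.construct_event in production.
    """
    pairs = dict(item.split("=", 1) for item in sig_header.split(","))
    timestamp, candidate = pairs["t"], pairs["v1"]
    # Reject replays: signatures older than `tolerance` seconds are stale.
    if abs(time.time() - int(timestamp)) > tolerance:
        return False
    signed_payload = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret.encode(), signed_payload,
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking a timing side channel.
    return hmac.compare_digest(expected, candidate)
```

If your webhook handler processes events without a check like this, anyone who finds the endpoint URL can forge "payment succeeded" events.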
Same scanner output. Same builder. Three apps. Three completely different "what to ship next" answers. That is the difference between a checklist and a context-driven assessment.
The five questions every builder can ask before launch
You don't need a tool to start thinking this way. You can ask yourself five questions about any app you've built:
- What is the worst-case bad day for my users if this app is breached? Embarrassment, financial loss, identity exposure, regulated data leakage. The answer determines how much governance the app needs.
- Who actually has access? Only me, my team, the public, regulated customers. The audience drives both the threat model and the compliance posture.
- What is the most sensitive piece of data this app stores or handles? Free-text feedback is one tier. Email addresses are another. Card numbers are a different conversation.
- What happens if my AI chat history leaks? Did I paste any real keys, real customer data, or real credentials into prompts while building? If yes, that is now part of the threat model.
- What does my platform actually do for me, and what doesn't it? Lovable does not write your privacy policy. Replit does not run your DPIA. Supabase does not enable RLS by default on tables you create. Whatever the platform doesn't do is your job.
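For the chat-history question, you don't have to rely on memory: export the log and scan it for well-known key shapes. A minimal sketch, with illustrative patterns only; real secret scanners ship far larger rule sets:

```python
import re

# Illustrative patterns, not an exhaustive rule set. The prefixes are the
# publicly documented formats for each credential type.
SECRET_PATTERNS = {
    "Stripe live key": r"sk_live_[0-9a-zA-Z]{10,}",
    "AWS access key": r"AKIA[0-9A-Z]{16}",
    "Private key block": r"-----BEGIN [A-Z ]*PRIVATE KEY-----",
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in `text`."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if re.search(pattern, text)]
```

Any hit means that credential is part of your threat model now: rotate it, because you can't un-paste it.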
If you can answer those five honestly, you already know more about your app's real risk profile than most generic scan reports will ever tell you. We covered specific patterns in The 5 Security Gaps Hiding in Every Vibe-Coded App, which extends the same lens to QA and production readiness.
Key takeaways
- Generic scanners run the same 500 checks on every app. Most of those findings are noise for any individual app.
- Context-driven risk depends on three dimensions: what the app does, who uses it, and what data it touches.
- The same code base produces three different "must fix" lists for a portfolio site, an internal tool, and a paid SaaS. Context decides priority.
- You can start thinking context-first by answering five questions about your app before launch.
- The job of governance is to tell you what matters for YOUR app, not to hand you a 60-page PDF of generic findings.
Get a Context-Driven Review of Your App
NEKOD's 360° review starts by asking what your app does, who uses it, and what data it touches. Then it runs the checks that actually matter, in the priority order they matter, and gives you a Launch Readiness Score that reflects your specific app, not a generic baseline. Free for the first scan.

