Paul Graham's "Do Things That Don't Scale" is startup gospel.
But as engineers, we're hardwired to do the opposite. We architect for millions of users, over-engineer for edge cases, and build systems that could handle Black Friday traffic on day one.
Here's the problem: All that beautiful, scalable code is often just expensive procrastination when you're trying to find product-market fit.
After building dozens of startups (and watching most fail spectacularly despite pristine codebases), I've developed a simple rule: Every unscalable hack gets exactly 90 days to prove its worth. If it teaches me something valuable about users or product, it graduates to a real solution. If not, it dies.
This isn't about being lazy—it's about being smart with your learning budget.
The Unscalable Code Mindset
Traditional engineering wisdom says: "Build it right the first time." Startup engineering says: "Build it fast, learn fast, then build it right."
The difference? Unscalable code optimizes for learning speed, not system performance.
Every shortcut is actually an experiment:
- Will users actually need this feature?
- How do they really use the system?
- Where are the actual bottlenecks?
Clean, scalable code encodes assumptions. Hacky, unscalable code surfaces truth.
Real Examples from the Trenches
Here are five "terrible" technical decisions I made while building my SaaS platform—and why they were actually brilliant for early-stage learning:
1. The Single Server Setup
What I did: Crammed everything onto one $50/month VM. Database, web server, background jobs, Redis, file storage—all living together in beautiful, chaotic harmony.
What engineers told me: "You need load balancers! Container orchestration! Microservices!"
What I learned: My entire platform uses 2GB of RAM at peak. That Kubernetes cluster I almost built? I'd be managing empty containers and paying for resources I don't need.
The graduation plan: When I hit 500 concurrent users, I'll know exactly which components need their own servers. Until then, my "single point of failure" is actually a single point of learning.
2. Configuration as Code (Literally)
# config.py
PRICING_BASIC = 29
PRICING_PRO = 99
MAX_FILE_SIZE = 10_000_000
AI_MODEL = "gpt-4"
WEBHOOK_TIMEOUT = 30
What I did: Hardcoded every configuration value directly in the source code. Want to change the pricing? Edit the file, commit, deploy.
What engineers told me: "Build a proper config management system! Environment variables! Feature flags!"
What I learned: I've changed these values exactly four times in six months. Building a configuration service would have taken longer than all my actual config changes combined.
The hidden superpower: Every configuration change is tracked in git history. I can grep my entire codebase for any value in seconds. No wondering which environment variable controls what.
3. SQLite in Production (Yes, Really)
What I did: Used SQLite for a multi-tenant web application with paying customers.
What engineers told me: "You need PostgreSQL! What about concurrent writes? ACID transactions? Replication!"
What I learned: My database is 156MB after six months. 90% of my queries are simple reads. SQLite handles my actual load beautifully, and the lack of connection pooling complexity means I can deploy anywhere instantly.
The reality check: I spent weeks planning a complex PostgreSQL setup with read replicas. My actual usage pattern? Three heavy write operations per day, maximum.
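For a read-heavy workload like this, the whole "database setup" can be a few lines. A minimal sketch (table and names are illustrative, not my actual schema), using SQLite's WAL mode so readers never block the occasional writer:

```python
import os
import sqlite3
import tempfile

# One file, no server, no connection pool. WAL mode lets reads
# proceed concurrently with the rare write.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)
conn.execute("PRAGMA journal_mode=WAL")

conn.execute("""CREATE TABLE IF NOT EXISTS tenants (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
)""")
conn.execute("INSERT INTO tenants (name) VALUES (?)", ("acme",))
conn.commit()

rows = conn.execute("SELECT name FROM tenants").fetchall()
```

Deploying means copying one file. Backing up means copying one file. That's the whole operational story until the usage patterns say otherwise.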
4. The World's Simplest Deployment Pipeline
#!/bin/bash
# deploy.sh
git push origin main
ssh server "cd /app && git pull && sudo systemctl restart myapp"
What I did: One command deploys to production. No CI/CD, no staging environments, no fancy orchestration.
What engineers told me: "You need automated testing! Blue-green deployments! Rollback capabilities!"
What I learned: Deploying 30+ times per week with zero automation forced me to write better code. Every deployment is intentional. Every change is small and focused.
The unexpected benefit: I've never had a deploy break production because I'm paranoid about each change. My "lack of safety nets" created better safety habits.
5. Global State Management
# globals.py (yes, this file exists)
from collections import defaultdict

active_users = {}
rate_limits = defaultdict(list)
cache = {}
background_tasks = []
What I did: Used global variables for application state. Server restart = everyone logs out.
What engineers told me: "Use Redis! Proper session management! Distributed state!"
What I learned: Average user session is 12 minutes. Users don't mind logging in again after updates. The complex session management system I was planning? Complete overkill for my usage patterns.
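To make the pattern concrete, here's roughly what rate limiting looks like when it's just a module-level dict. This is an illustrative sketch, not my production code; the function name and limits are assumptions:

```python
import time
from collections import defaultdict

# In-process state: user_id -> timestamps of recent requests.
# A restart wipes it, and at this stage that's fine.
rate_limits = defaultdict(list)

def allow_request(user_id: str, limit: int = 5, window: float = 60.0) -> bool:
    """Sliding-window limiter: at most `limit` requests per `window` seconds."""
    now = time.monotonic()
    # Drop timestamps that have aged out of the window.
    recent = [t for t in rate_limits[user_id] if now - t < window]
    rate_limits[user_id] = recent
    if len(recent) >= limit:
        return False
    rate_limits[user_id].append(now)
    return True
```

Twelve lines, fully greppable, trivially debuggable. When (if) I outgrow a single process, the Redis version is a near-mechanical translation, and by then I'll know the real limits and windows users need.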
The Three-Month Rule in Practice
Here's how I actually implement "things that don't scale" without shipping garbage:
Month 1: Ship the Hack
- Write the simplest possible solution
- Document the assumptions you're making
- Set a calendar reminder for 90 days
Month 2: Measure the Hack
- How often do you hit the limitations?
- What are users actually doing?
- Where are the real pain points?
Month 3: Graduate or Kill
- If the hack taught you something valuable about users → build it properly
- If it's causing more confusion than insight → delete it
- If it's working fine → extend the deadline
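The tracking itself doesn't need tooling either. A hedged sketch of the simplest possible hack registry (a dict here, though a `HACKS.md` file works just as well; all names are illustrative):

```python
from datetime import date, timedelta

# hack name -> the assumption it tests and its 90-day review deadline
hacks = {}

def ship_hack(name: str, assumption: str, shipped: date) -> None:
    """Record a shortcut along with what it's supposed to teach us."""
    hacks[name] = {
        "assumption": assumption,
        "review_by": shipped + timedelta(days=90),
    }

def due_for_review(today: date) -> list[str]:
    """Hacks whose 90 days are up: graduate, kill, or extend."""
    return [name for name, h in hacks.items() if today >= h["review_by"]]
```

The point isn't the code; it's that every shortcut ships with a written assumption and an expiry date, so "temporary" can't silently become "permanent."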
The Critical Questions
Before every "proper" rewrite, I ask:
- What did this hack teach me? If the answer is "nothing," the rewrite probably isn't worth it.
- What assumptions am I encoding? Every scalable solution bakes in beliefs about future usage.
- What's the learning cost? Sometimes the "right" solution prevents you from discovering better solutions.
Common Unscalable Patterns That Actually Work
Manual Processes
- Admin panel with literal "Approve" buttons instead of automated workflows
- Manual customer onboarding calls instead of signup flows
- Copy-paste deployment scripts instead of CI/CD
Why they work: You learn exactly what needs automation by feeling the pain first.
Hardcoded Business Logic
- Pricing tiers as constants instead of database tables
- User permissions as if/else statements instead of role systems
- Feature flags as commented code instead of proper toggles
Why they work: You discover which business rules actually change frequently.
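"Permissions as if/else statements" sounds scarier than it looks. A minimal sketch (plan names and the rule itself are made up for illustration): every rule is explicit, greppable, and sits in one place. When the branches start multiplying, that's your signal to graduate to a role system.

```python
# Hypothetical permission check, hardcoded instead of a role system.
def can_export_data(user: dict) -> bool:
    # Pro-plan users paid for exports.
    if user.get("plan") == "pro":
        return True
    # Admins can always export (e.g. for support).
    if user.get("is_admin"):
        return True
    return False
```

Six months of git blame on a function like this tells you exactly which rules churn, which is precisely the data you need before designing the "real" permissions model.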
Simple Tech Stacks
- Monoliths instead of microservices
- Server-side rendering instead of SPAs
- Direct database queries instead of ORMs
Why they work: Less abstraction means more learning about actual performance characteristics.
When Not to Do Things That Don't Scale
This approach has boundaries. Don't hack these:
Security
- User authentication and authorization
- Data encryption and privacy
- Input validation and sanitization
Legal Compliance
- GDPR, CCPA, and privacy regulations
- Financial data handling
- Industry-specific compliance requirements
Core User Experience
- Data loss or corruption
- Performance that affects user workflow
- Features that users depend on daily
The Mental Shift
The hardest part isn't writing unscalable code—it's overcoming the guilt.
We're trained to feel bad about:
- Hardcoded values
- Manual processes
- Single points of failure
- Lack of tests
- Non-DRY code
But at the earliest stages, these "problems" are actually features. They force you to stay close to your users and your product.
The reframe: This isn't technical debt. It's technical education with a clear expiration date.
Making It Work for Your Team
For Solo Founders
- Set clear graduation criteria for each hack
- Document your assumptions
- Track what you learn from each shortcut
For Small Teams
- Make the philosophy explicit in your engineering culture
- Review hacks monthly—what are they teaching us?
- Celebrate good deletions as much as good features
For Larger Teams
- Create "learning debt" tickets alongside technical debt
- Allow junior developers to ship simple solutions first
- Measure speed of learning, not just code quality
The Graduation Ceremony
The most satisfying moment isn't writing the perfect, scalable solution. It's deleting the hack that taught you what to build.
When I finally replaced my global state management with Redis, I understood exactly which data needed persistence, which could be ephemeral, and how users actually interact with sessions. The hack paid for itself in knowledge.
Your 90-Day Challenge
Pick one area of your codebase where you're over-engineering for imaginary scale:
- Identify the simplest possible solution (even if it feels wrong)
- Set a 90-day timer for graduation or deletion
- Document what you expect to learn from this approach
- Ship it and measure what actually happens
The Bottom Line
Paul Graham was right about doing things that don't scale.
But he wasn't just talking about customer acquisition and manual processes. He was talking about learning faster than your competition.
Unscalable code is a feature, not a bug—if you treat it as temporary education instead of permanent shortcuts.
The startups that win aren't the ones with the best architecture. They're the ones that learn what users actually need before their competitors do.
Your infrastructure doesn't need to scale to millions of users. Your learning needs to scale to product-market fit.
Start hacking. Set a timer. Graduate or delete.
The clock is ticking.