Technical due diligence happens when investors evaluate companies for funding or acquirers evaluate companies for acquisition. The goal is understanding technical capabilities, risks, and sustainability—whether the technology can deliver on business promises.
I’ve been on both sides: preparing companies for due diligence and conducting due diligence for investors. Here’s what actually matters and how to prepare.
What Due Diligence Seeks to Understand
Due diligence isn’t about finding perfect code. It’s about assessing technical risk and capability in the context of business goals.
Core Questions
Can the technology do what the business claims? If the company claims AI-powered personalization, is there actually meaningful AI, or is it simple rules dressed up? If the platform claims to handle millions of users, can it?
Is the technology sustainable? Can the team maintain and extend the codebase? Is technical debt manageable? Are there single points of failure in people or systems?
What are the risks? Security vulnerabilities? Scalability limits? Dependency on specific individuals? Regulatory compliance gaps?
What investment is needed? To reach the next milestone, what technical work is required? Are the estimates realistic?
Context Matters
A seed-stage startup faces different expectations than a Series C company. Evaluators expect:
Early stage: Scrappy code is fine. Emphasis is on founder technical capability, product-market fit validation, and the team's ability to iterate. Technical debt is acceptable if deliberate.
Growth stage: Codebase should be organized. Basic security and scalability. Team can execute without founder writing all the code. Technical practices that support team growth.
Late stage/acquisition: Professional engineering organization. Security maturity. Scalability proven. Technical documentation. Reduced bus factor.
Preparing for Due Diligence
If you’re preparing your company for evaluation:
Documentation
Have documentation ready:
Architecture overview: System components, how they interact, data flows. Doesn’t need to be elaborate; a few diagrams and a written summary suffice.
Technology stack: What languages, frameworks, databases, and services you use. Why you chose them.
Infrastructure: Where things run, how they’re deployed, monitoring and alerting setup.
Team structure: Who does what. How work is organized. Reporting relationships.
Development process: How features go from idea to production. Testing practices. Deployment frequency.
Evaluators will request this information. Having it prepared demonstrates organization and speeds the process.
Honest Assessment
Know your weaknesses before evaluators find them:
- Where is technical debt highest?
- What security issues exist?
- What would break at 10x scale?
- Who is a single point of failure?
- What hasn’t been maintained?
Evaluators appreciate honesty about known issues more than discovering hidden problems. “We know the auth system needs refactoring; here’s our plan” is better than pretending it’s fine.
Code Quality
While due diligence isn’t a code quality audit, evaluators will examine code:
- Is it readable and organized?
- Are there tests? Do they pass?
- Are there obvious security issues?
- Is the codebase consistent?
You don’t need to refactor everything before due diligence. But fix obvious issues: commented-out code, exposed credentials, clearly broken tests.
Security Basics
Security issues discovered in due diligence create significant concern. At minimum:
- No credentials in source code
- Authentication on administrative interfaces
- Basic input validation
- Dependency vulnerabilities addressed
- Security incident response plan exists
If you’ve had penetration testing, share the report and remediation status.
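The first item above, no credentials in source code, is easy to spot-check yourself before evaluators do. Here's a minimal sketch; the regex patterns are illustrative only, and a dedicated scanner such as gitleaks covers far more formats:

```python
import re

# Illustrative patterns only; a real secret scanner covers many more formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(
        r"""password\s*[:=]\s*['"][^'"]{4,}['"]""", re.IGNORECASE
    ),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of secret patterns that match the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

Run over your repository's files, this flags the most embarrassing finds: for example, `find_secrets('db_password = "hunter22"')` returns `["hardcoded_password"]`. It's no substitute for a real scanner, but it makes the checklist item concrete.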
Intellectual Property
Ensure IP is clean:
- Employees have signed IP assignment agreements
- No copy-pasted code from previous employers
- Open source licenses are compatible with your business model
- Third-party code is properly licensed
IP issues discovered during due diligence can delay or kill deals.
Conducting Due Diligence
If you’re evaluating another company:
Start with Architecture
Request an architecture review session:
- How does the system work at a high level?
- What are the major components?
- How do they communicate?
- Where does data live?
- What are the external dependencies?
Architecture reveals more about technical maturity than code quality. A well-architected system with rough code is better than clean code in a poorly designed system.
Assess the Team
Technology is built by people. Understand:
- Who are the key technical people?
- What’s the bus factor for critical systems?
- How long have people been there?
- What’s the team’s experience level?
- How do they handle disagreements?
Interview key engineers individually. Their explanations of the system reveal technical depth and communication ability.
Review Code Selectively
Don’t try to review all code. Sample strategically:
- Security-sensitive code: Authentication, authorization, payment processing
- Core business logic: The differentiating functionality
- Recent commits: Current coding standards
- Old code: How they handle legacy systems
Look for:
- Basic security hygiene
- Test coverage and test quality
- Code organization and readability
- Consistency across the codebase
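One way to build that sample mechanically: gather each file's last-commit age (e.g. via `git log -1 --format=%ct -- <path>`), then pull the security-sensitive, newest, and oldest files. A sketch with an illustrative keyword list:

```python
# Illustrative keywords for flagging security-sensitive paths.
SENSITIVE_KEYWORDS = ("auth", "payment", "login", "token", "billing")

def sample_files(files: list[tuple[str, int]], n: int = 3) -> dict[str, list[str]]:
    """Pick review candidates from (path, last_commit_age_days) pairs:
    security-sensitive paths, plus the n newest and n oldest files."""
    sensitive = [p for p, _ in files if any(k in p.lower() for k in SENSITIVE_KEYWORDS)]
    by_age = sorted(files, key=lambda item: item[1])
    return {
        "security_sensitive": sensitive,
        "recent": [p for p, _ in by_age[:n]],   # current coding standards
        "legacy": [p for p, _ in by_age[-n:]],  # how legacy code is handled
    }
```

Feeding this a file listing from the repository yields a review set aligned with the sampling strategy above, without having to eyeball the whole tree.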
Examine Infrastructure
How is the system operated?
- Deployment process and frequency
- Monitoring and alerting
- Incident response
- Backup and recovery
- Infrastructure as code?
Infrastructure maturity correlates with operational reliability.
Probe Scalability
If the business plan projects significant growth:
- What’s the current scale?
- What’s been tested?
- Where are the bottlenecks?
- What changes are needed for 10x growth?
Be skeptical of “we’ll just add more servers.” Scaling usually requires architectural changes.
Check Security Posture
Security due diligence includes:
- Has there been security testing?
- What’s the vulnerability management process?
- How is access controlled?
- How is sensitive data protected?
- Is there a security incident response plan?
Request previous penetration test reports if available.
Technical Debt Assessment
All codebases have technical debt. Assess:
- Is debt acknowledged and tracked?
- Is there a strategy for managing it?
- Is debt localized or pervasive?
- What’s the trajectory (improving or worsening)?
Red Flags
Certain findings raise serious concerns:
No version control: This is almost disqualifying. It suggests fundamental process immaturity.
No tests: No automated testing means changes are risky and velocity will slow.
Credentials in code: Indicates poor security practices and potential exposure.
Key-person dependency: One engineer knows everything; others know little. Risk of total loss if that person leaves.
No deployment process: Manual, ad-hoc deployments suggest unreliable operations.
Defensive or vague answers: Unwillingness to discuss weaknesses suggests hiding problems.
No documentation and no one can explain the system: If the team can’t explain their own system, they may not understand it.
Yellow Flags
Concerns worth investigating but not disqualifying:
Technical debt: Expected, but needs management plan.
Outdated dependencies: Common, but security implications need assessment.
Monolithic architecture: Not wrong, but scalability needs discussion.
Limited monitoring: Problematic but fixable.
Junior team: Can work with good leadership and realistic expectations.
Reporting Findings
Due diligence findings should be:
Contextualized: Findings relative to company stage and market. Perfect code isn’t the expectation for seed-stage startups.
Prioritized: What are the critical risks versus minor concerns?
Actionable: What should be done about findings? What’s the investment required?
Balanced: Include strengths, not just weaknesses. Investors need the full picture.
A good due diligence report enables informed decisions—not just “go/no-go” but understanding of technical risks and required investments.
Key Takeaways
- Due diligence assesses technical risk and capability in business context, not code perfection
- Prepare documentation, honest self-assessment, and clean up obvious issues
- Architecture and team assessment reveal more than code review
- Sample code strategically: security-sensitive areas, core logic, recent and old code
- Red flags include no version control, no tests, credentials in code, and key-person dependency
- Report findings with context, prioritization, and actionable recommendations