Technical due diligence is where most acquisitions of software companies go quietly wrong. The financials get a forensic accountant, the legal docs get three lawyers, the commercial assumptions get a strategy partner — and the technology gets a CTO friend of the deal partner who spends an afternoon poking around the GitHub repo. Then the deal closes, and 18 months later the acquirer discovers that the system it bought can't actually be maintained, scaled, or sold to its existing customers.
We've run technical DD on ten software acquisitions in the last three years. Three of those deals collapsed after our report: one because the codebase was effectively unmaintainable, one because customer data was held in a way that wouldn't survive a GDPR audit, and one because the key technical staff had quietly accepted offers elsewhere. Seven deals closed. In all seven cases, our findings reshaped either the price or the post-close integration plan.
This is the checklist we use now. If you're acquiring a software company, this is the bar.
The Team Test: Who Are You Actually Buying?
In most software acquisitions, the team is at least half of what's being bought. A pristine codebase abandoned by its authors is worth less than a workable codebase whose creators stay on for the integration.
We probe:
- Bus factor: How many engineers would need to leave before critical parts of the system became unmaintainable? In one DD, that number was one. The acquirer renegotiated the founder's retention package and added 18-month performance-based earn-outs for the two other senior engineers.
- Key-person risk: Which specific subsystems live in one person's head? We ask senior engineers directly, then cross-reference Git blame against the previous 24 months of meaningful commits; the Git half of that cross-reference is sketched after this list.
- Retention signals: We ask every engineer interviewed whether they'd stay through a transaction. Anonymised. The signal is more honest than any survey HR has run.
- Hiring track record: Has the team successfully hired senior engineers in the last two years? A company that hasn't made a senior hire in 18 months is signalling either a culture problem or a compensation problem.
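The Git half of that key-person check is scriptable. A minimal sketch, assuming a local clone with `git` on the PATH; the 24-month window, the 80% concentration threshold, and the 20-change noise floor are illustrative defaults, not a fixed methodology:

```python
# Sketch: approximate key-person concentration from Git history.
# Buckets file changes by top-level directory and flags areas where one
# author accounts for most recent work. Thresholds are illustrative.
import subprocess
from collections import Counter, defaultdict

def author_concentration(repo: str, since: str = "24 months ago") -> dict[str, Counter]:
    """Count file changes per author per top-level directory."""
    log = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}", "--format=@%ae", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts: dict[str, Counter] = defaultdict(Counter)
    author = None
    for line in log.splitlines():
        if line.startswith("@"):           # commit boundary: author email
            author = line[1:]
        elif line and author:              # a file path changed in that commit
            counts[line.split("/", 1)[0]][author] += 1
    return counts

def flag_key_person_risk(counts: dict[str, Counter], threshold: float = 0.8) -> list[str]:
    flags = []
    for area, by_author in counts.items():
        total = sum(by_author.values())
        top_author, top_n = by_author.most_common(1)[0]
        if total >= 20 and top_n / total >= threshold:   # ignore low-traffic areas
            flags.append(f"{area}: {top_author} made {top_n}/{total} recent changes")
    return flags

if __name__ == "__main__":
    for finding in flag_key_person_risk(author_concentration(".")):
        print("KEY-PERSON RISK:", finding)
```

The output is a starting point for interviews, not a verdict: vendored code and mechanical refactors inflate the counts, so every flag gets checked against what the engineers themselves tell us.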
The Code Test: Is It Maintainable?
The code matters less than you'd think for valuation, but enormously for the cost of the next three years of ownership. We measure four things:
- Test coverage and quality: Coverage percentage is necessary but not sufficient — we look at what the tests actually exercise. A 70% line-coverage suite that never tests error paths is significantly worse than a 45% suite with focused integration tests on critical flows.
- Modularity: Can a feature be modified without touching unrelated parts of the codebase? We pick three recent feature additions from the Git log and trace what they actually changed (see the sketch after this list).
- Dead code: What percentage of the codebase is unreachable in production? Above 15% is a sign that the team has stopped trusting their own deletion judgment.
- Dependency hygiene: How current are the major dependencies? Are there any unpatched CVEs older than 12 months? Are critical libraries on supported versions?
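The modularity trace itself is cheap to script. A sketch, again assuming a local clone; the commit hashes are supplied by a reviewer who picks them from the log:

```python
# Sketch: how widely did recent feature commits spread across the codebase?
# A feature that touches many unrelated top-level areas suggests poor modularity.
# Usage: python trace.py <sha1> <sha2> <sha3>
import subprocess
import sys

def touched_areas(repo: str, commit: str) -> set[str]:
    """Top-level directories touched by a single commit."""
    out = subprocess.run(
        ["git", "-C", repo, "show", "--name-only", "--format=", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return {path.split("/", 1)[0] for path in out.splitlines() if path}

for sha in sys.argv[1:]:
    areas = touched_areas(".", sha)
    verdict = "contained" if len(areas) <= 2 else "wide blast radius"
    print(f"{sha}: {sorted(areas)} -> {verdict}")
```

The "two areas" cut-off is a heuristic we adjust per repository layout; a monorepo with one giant `src/` directory needs a finer-grained bucket than the top-level split used here.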
We always read the deployment pipeline as carefully as the application code. A team that has invested in a clean, fast CI/CD pipeline reveals far more about its engineering culture than its README.
The Data Test: What Are You Inheriting?
Data is the most underrated DD surface. You're not just acquiring a codebase — you're acquiring every customer record, every backup, every analytics export, and every legal obligation attached to them.
- PII inventory: What personal data does the system collect, where is it stored, how is it encrypted, and how long is it retained? We've seen acquisitions where the target had no inventory at all, meaning the acquirer effectively couldn't respond to a single GDPR access request post-close. The minimal inventory shape we ask for is sketched after this list.
- GDPR / CCPA posture: Are data processing agreements in place with sub-processors? Is there a documented Data Protection Impact Assessment for high-risk processing? Are deletion workflows actually functional?
- Data lineage: Can the team explain how a given piece of data flows from collection to storage to analytics to deletion? If not, you can't make compliance commitments to your customers post-close.
- Backup integrity: When was the last full restore test? Backups that have never been restored are not backups; they are wishes.
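When the target has no inventory at all, we build a skeletal one during the engagement. A minimal sketch of that shape, with illustrative field names and example rows (assumptions for the sketch, not a standard):

```python
# Sketch: a minimal PII inventory record, plus the gap checks that matter
# most for GDPR access and deletion requests. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class PiiRecord:
    field: str                     # e.g. "customer.email"
    store: str                     # where it lives, e.g. "postgres.prod.users"
    encrypted_at_rest: bool
    retention_days: int | None     # None = no retention policy defined
    deletion_workflow: str | None  # runbook or automation that removes it

def audit(inventory: list[PiiRecord]) -> list[str]:
    findings = []
    for r in inventory:
        if not r.encrypted_at_rest:
            findings.append(f"{r.field} in {r.store}: not encrypted at rest")
        if r.retention_days is None:
            findings.append(f"{r.field} in {r.store}: no retention policy")
        if r.deletion_workflow is None:
            findings.append(f"{r.field} in {r.store}: no deletion path")
    return findings

inventory = [
    PiiRecord("customer.email", "postgres.prod.users", True, 2555, "erasure-runbook-v2"),
    PiiRecord("customer.email", "s3://analytics-exports", False, None, None),  # the classic gap
]
print("\n".join(audit(inventory)) or "inventory clean")
```

The second row is the pattern we see most often: the primary store is handled properly, while an analytics export of the same field has no encryption, no retention policy, and no deletion path.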
The Infra Test: What's the Cost Trajectory?
Cloud bills lie about the future. We look for the slope, not just the level.
- Cost per customer/transaction over 24 months: Is it improving, flat, or degrading? A worsening curve at the target's current scale becomes a much bigger problem at the acquirer's intended scale. We fit the slope with a quick regression; see the sketch after this list.
- Single-vendor lock-in: How much of the system depends on proprietary services (Aurora, DynamoDB, BigQuery, Snowflake)? Lock-in isn't always bad — but it should be a conscious choice, and the acquirer should know the migration cost if forced.
- Reserved capacity vs on-demand: What percentage of compute is on reserved instances or savings plans? A team with 0% reserved is typically leaving 30–50% of compute spend on the table — and that's free money for the post-close integration plan.
- Multi-region readiness: Is the system architected to run in a second region with reasonable effort? Even if you don't need it today, single-region systems rule out certain customer segments and certain regulatory regimes.
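The slope check is deliberately simple. A sketch with fabricated inputs (real ones come from 24 months of cloud bills and the customer count; `statistics.linear_regression` needs Python 3.10+):

```python
# Sketch: fit a least-squares slope to cost per customer over 24 months.
# The inputs below are made up to show a degrading curve: cost grows
# 6% a month while customers grow 3% a month.
import statistics

months = list(range(24))
monthly_cost = [30_000 * 1.06 ** m for m in months]      # illustrative cloud bill
monthly_customers = [120 * 1.03 ** m for m in months]    # illustrative customer count

unit_cost = [c / n for c, n in zip(monthly_cost, monthly_customers)]
slope = statistics.linear_regression(months, unit_cost).slope

print(f"cost per customer is {'degrading' if slope > 0 else 'improving'} "
      f"by ~${abs(slope):.2f}/month")
```

Anything with a positive slope gets projected forward to the acquirer's intended scale before the pricing conversation, not after.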
The Security Test: What's Walking In With You?
You inherit every vulnerability, every exposed secret, every misconfigured permission. Security DD isn't optional.
- Open vulnerabilities: We run SAST and SCA scans against the codebase. Anything critical or high that's older than 90 days requires explanation; a triage sketch follows this list.
- Secret hygiene: A truffleHog/Gitleaks scan over the entire Git history. Real credentials in history (even old ones) are an immediate finding — they were potentially exposed to anyone who cloned the repo.
- Audit log integrity: Are administrative actions logged? Can logs be tampered with? Is there a real incident response plan, or is it a Confluence page nobody has read?
- Penetration test history: Has the system been pen-tested in the last 12 months? By whom? Where's the report? What was remediated?
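A triage sketch, assuming a simplified findings export with `severity` and `first_seen` fields; real scanner output varies by tool, so treat this as the shape of the check rather than a parser for any particular product:

```python
# Sketch: flag critical/high findings older than 90 days from a simplified
# JSON export: [{"id": ..., "severity": ..., "first_seen": "YYYY-MM-DD"}, ...]
import json
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)

def stale_findings(report_path: str, today: date) -> list[dict]:
    with open(report_path) as f:
        findings = json.load(f)
    return [
        fi for fi in findings
        if fi["severity"] in ("critical", "high")
        and today - date.fromisoformat(fi["first_seen"]) > MAX_AGE
    ]

for fi in stale_findings("findings.json", date.today()):
    print(f"explain or remediate: {fi['id']} "
          f"({fi['severity']}, first seen {fi['first_seen']})")
```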
Commercial-Technical Alignment
The most expensive DD finding is when the technology doesn't actually support the commercial model the acquirer is pricing into the deal.
One acquisition we worked on: the target sold the product as multi-tenant SaaS. The underlying database was actually a single-tenant schema-per-customer model, with 800 schemas in one database. Adding a customer required running migrations in 800 places. The acquirer's expansion plan assumed onboarding could scale from the existing 12 customers a month to 200 a month — completely incompatible with the actual architecture. We found this in week two. The price was renegotiated down by 22% to fund the multi-tenancy rebuild that would otherwise have hit the acquirer's P&L unannounced.
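To make the finding concrete: in a schema-per-customer model, every schema change must be applied once per tenant. A hypothetical sketch assuming PostgreSQL and psycopg; the table and column names are ours for illustration, not the target's code:

```python
# Sketch: the maintenance burden of schema-per-customer. Every migration
# runs once per tenant schema: 800 schemas means 800 DDL statements and
# 800 chances to fail halfway through.
import psycopg

ADD_COLUMN = 'ALTER TABLE "{schema}".invoices ADD COLUMN po_number text'

def migrate_all_tenants(dsn: str, schemas: list[str]) -> None:
    with psycopg.connect(dsn) as conn:  # commits on clean exit
        for schema in schemas:
            # formatting identifiers into DDL is tolerable in a sketch;
            # production code would quote identifiers properly
            conn.execute(ADD_COLUMN.format(schema=schema))

# A shared-schema design keyed on tenant_id would run this statement exactly once.
```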
A Sample DD Checklist
| Area | Key Question | Artefact Required | Red Flag |
|---|---|---|---|
| Team | Who are the 3 people the system can't lose? | Engineer interviews + Git blame | Bus factor of 1 on a critical subsystem |
| Code | What's the test coverage on payment / auth flows? | Coverage report + test review | Below 50% on critical paths |
| Data | Where is PII stored and how is it deleted? | Data inventory + DPIA | No inventory exists |
| Infra | What's the cost-per-customer trend over 24 months? | Cloud bills + customer count | Degrading slope |
| Security | How old are open critical/high CVEs? | SAST + SCA scan results | Any unresolved > 90 days |
| Compliance | Are sub-processor DPAs in place? | DPA register + supplier list | Missing DPAs for customer data sub-processors |
| Commercial | Does the architecture support the sales model? | Architecture diagram + customer model | Single-tenant code sold as multi-tenant |
The Red Flags That Killed Deals
Three patterns where we recommended walking away — and the acquirer did:
- Unrecoverable legacy: A 14-year-old PHP 5.6 codebase with no migration plan and its original author retired. Maintenance was already consuming half the team's capacity. The acquirer was buying a depreciating asset disguised as a SaaS company.
- Compliance gap: A B2C product handling card data with no PCI DSS scope reduction, raw PANs in plaintext logs going back four years. The clean-up alone would have triggered breach disclosure requirements in three jurisdictions.
- Hidden churn: Customer churn looked acceptable at the cohort level, but engineering interviews revealed the support team was running constant manual recovery workflows for a data corruption bug nobody had time to fix. Real churn would have been 3× reported once that workflow stopped.
Key Takeaways
- Technical DD is not a code review — it spans team, code, data, infrastructure, security, and commercial alignment.
- The team test is often the highest-leverage finding; bus factor and retention signals shape the deal structure.
- Data inventory and GDPR posture are the most commonly missed surfaces and the most expensive to fix post-close.
- Cost trajectory matters more than current cost — what's the slope at the acquirer's intended scale?
- Security findings older than 90 days are an engineering culture signal, not just a technical issue.
- Commercial-technical misalignment is the single most expensive DD finding when missed.