Airtable occupies an unusual position in the stack: it's the only product where business users will happily model their own data, and engineers will happily query it from production code. That dual-audience property is rare and genuinely valuable. It's also the thing that gets teams in trouble — because the spreadsheet-shaped interface that makes Airtable easy for business users hides the fact that it is, underneath, a constrained API-driven database.
This post is the honest version of the Airtable conversation we have with clients. Where it earns its place. Where it doesn't. And the engineering signals that tell you it's time to migrate.
What Airtable Actually Is, From an API Perspective
Strip the UI away and you have:
- A REST API with rate limits of 5 requests per second per base (not per workspace, not per user — per base).
- A practical ceiling of around 50,000 records per base. Higher-tier plans officially allow up to 500k, but performance degrades meaningfully past ~100k.
- No transactions. Batch writes top out at 10 records per request, and there is no way to atomically update multiple records across requests or tables.
- No foreign keys in the database sense — "linked records" are arrays of record IDs, validated client-side (a concrete example follows below).
- Schema changes are a UI operation, not a migration. They can break automations and integrations silently.
Those constraints aren't bugs — they're the price of having business users self-serve. But they shape every decision about whether Airtable belongs in a particular use case.
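To make the linked-record point concrete, here is the shape of a record as the REST API returns it. Every value below is illustrative:

```python
# Illustrative only: field names and record IDs are made up.
record = {
    "id": "recA1b2C3d4E5f6G7",
    "createdTime": "2024-01-15T09:30:00.000Z",
    "fields": {
        "Name": "Spring launch brief",
        # A "linked record" field is just an array of record IDs in another
        # table. Nothing server-side guarantees these IDs still resolve.
        "Campaigns": ["recXq9Zt8Yw7Vu6T5", "recMn4Lk3Jh2Gf1D0"],
    },
}
```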
Where Airtable Genuinely Shines
We use Airtable in production happily for three patterns:
Operational dashboards for internal teams. The marketplace operations team that needs to triage 200 daily applications, assign reviewers, track status, and generate weekly reports — that's an Airtable use case. The team can adjust their own workflow without engineering tickets. The data volume stays well under limits. The 5 req/sec ceiling is invisible because it's humans clicking, not services polling.
Editorial and content workflows. The marketing team that wants to plan campaigns, track content production status, and have writers update their own drafts — also Airtable. Linked records to a "Campaigns" table give them the structure they want without anyone writing a migration.
Configuration data for low-traffic services. Feature flags, partner configurations, lookup tables that change weekly and need to be edited by non-engineers. A service can read this once at startup or on a 5-minute cache and never hit the rate limits.
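As a sketch of that third pattern, assuming a hypothetical "Feature Flags" table with "key" and "enabled" fields (all names here are illustrative, not anything Airtable prescribes):

```python
import time

from pyairtable import Api

CACHE_TTL_SECONDS = 300  # one Airtable read per 5 minutes, far below 5 req/sec
_cache: dict = {"flags": {}, "fetched_at": 0.0}


def get_feature_flags(api_key: str, base_id: str) -> dict[str, bool]:
    """Return feature flags, refreshing from Airtable at most once per TTL window."""
    now = time.monotonic()
    if now - _cache["fetched_at"] > CACHE_TTL_SECONDS:
        table = Api(api_key).table(base_id, "Feature Flags")
        _cache["flags"] = {
            rec["fields"]["key"]: bool(rec["fields"].get("enabled"))
            for rec in table.all()
            if "key" in rec["fields"]
        }
        _cache["fetched_at"] = now
    return _cache["flags"]
```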
Where Airtable Gets You Hurt
The failure modes are predictable. We've seen each of these multiple times:
- Customer-facing reads. "Just hit Airtable for the product catalogue" works until the third concurrent request, when you blow the rate limit and 80% of users see a loading spinner.
- Webhook hot paths. Stripe webhook arrives, code tries to log it to Airtable, Airtable is at 5 req/sec, the webhook handler times out, Stripe retries, and now you've got duplicate-event headaches on top of the original problem (a decoupled alternative is sketched after this list).
- Anything requiring transactions. Subscription state changes that need to update three related records atomically. Airtable cannot do this. You'll write race conditions you can't fix.
- Past 100k rows. Filtering and sorting via the API get slow and unreliable. Views that worked fine at 30k records start returning incomplete results at 150k.
- Schema-coupled integrations. A business user renames a field. Six automations and one production service break silently. By the time you find out, you've lost a day of data.
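The webhook failure mode has a standard fix: acknowledge immediately, buffer the write, and let a background worker drain the queue at Airtable's pace. A minimal sketch, assuming the upsert helper defined in the next section (queue and field names are illustrative):

```python
import queue
import threading

event_log_queue: queue.Queue = queue.Queue()


def handle_stripe_webhook(event: dict) -> tuple[str, int]:
    """Enqueue and return 200 immediately; never call Airtable in the request cycle."""
    event_log_queue.put(event)
    return "ok", 200


def airtable_log_worker(airtable) -> None:
    """Drain the queue at whatever pace Airtable's rate limit allows."""
    while True:
        event = event_log_queue.get()
        # Keying on the Stripe event ID makes Stripe's retries harmless.
        airtable.upsert("event_id", event["id"], {"type": event["type"]})


# Started once at boot, e.g.:
# threading.Thread(target=airtable_log_worker, args=(client,), daemon=True).start()
```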
The Right Way to Read From Airtable
When Airtable is genuinely the right tool, the integration code matters. pyairtable is the maintained Python client and handles pagination cleanly. Proactive rate limiting is still your responsibility: recent versions will retry a 429 for you, but nothing stops you from bursting past 5 req/sec in the first place, and those retries surface as latency.
```python
import time

from pyairtable import Api
from pyairtable.formulas import match


class RateLimitedAirtable:
    """Wraps pyairtable with a simple inter-request throttle.

    Targets 4 req/sec by default to leave headroom under the 5 req/sec cap.
    """

    def __init__(self, api_key: str, base_id: str, table_name: str, rps: float = 4.0):
        self.table = Api(api_key).table(base_id, table_name)
        self.interval = 1.0 / rps
        self._last_request: float = 0.0

    def _throttle(self) -> None:
        """Sleep just long enough to keep requests at least `interval` seconds apart."""
        elapsed = time.monotonic() - self._last_request
        if elapsed < self.interval:
            time.sleep(self.interval - elapsed)
        self._last_request = time.monotonic()

    def all_records(self, view: str | None = None) -> list[dict]:
        """Return all records across all pages, throttling between page fetches."""
        records: list[dict] = []
        for page in self.table.iterate(view=view, page_size=100):
            self._throttle()
            records.extend(page)
        return records

    def find_by(self, **fields) -> dict | None:
        """Return the first record matching the given field values, or None."""
        self._throttle()
        results = self.table.all(formula=match(fields), max_records=1)
        return results[0] if results else None

    def upsert(self, key_field: str, key_value: str, fields: dict) -> dict:
        """Idempotent write keyed on a unique business field."""
        existing = self.find_by(**{key_field: key_value})
        self._throttle()
        if existing:
            return self.table.update(existing["id"], fields)
        return self.table.create({**fields, key_field: key_value})
```
Three things this gives you that the naive approach doesn't: explicit rate limiting that won't burst past the ceiling, idempotent upserts keyed on a business field (Airtable's auto-generated record IDs, the recXXX values, are not stable across base restores), and pagination that handles bases of any size.
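Usage, with placeholder credentials and a hypothetical "Orders" table:

```python
# All identifiers below are illustrative placeholders.
client = RateLimitedAirtable(
    api_key="patXXXXXXXXXXXXXX",  # a personal access token
    base_id="appXXXXXXXXXXXXXX",
    table_name="Orders",
)

order = client.find_by(order_number="SO-1042")
client.upsert("order_number", "SO-1042", {"status": "shipped"})
```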
The Postgres Mirror Pattern
When Airtable is right for the editing experience but wrong for read traffic, the pattern we use is a one-way mirror from Airtable to Postgres. Business users edit in Airtable. A sync job (every 60 seconds, or driven by Airtable webhooks where available) pushes changes to Postgres. Production reads happen against Postgres at any RPS.
Write paths get more complex — services need to write back through the Airtable API and then wait for the next sync — but for read-heavy workloads where business users own the data model, this gives you the best of both worlds: Airtable's edit experience, Postgres's performance characteristics.
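A minimal sketch of one sync pass, assuming a Postgres table created as `CREATE TABLE mirror (airtable_id text PRIMARY KEY, fields jsonb, synced_at timestamptz)` and the RateLimitedAirtable client from above; the table name and connection details are illustrative:

```python
import json

import psycopg2


def sync_pass(airtable: RateLimitedAirtable, dsn: str) -> None:
    """One pass of the one-way mirror: pull from Airtable, upsert into Postgres."""
    records = airtable.all_records()
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:  # `with conn` commits on success
            cur.executemany(
                """
                INSERT INTO mirror (airtable_id, fields, synced_at)
                VALUES (%s, %s, now())
                ON CONFLICT (airtable_id)
                DO UPDATE SET fields = EXCLUDED.fields, synced_at = now()
                """,
                [(r["id"], json.dumps(r["fields"])) for r in records],
            )
    finally:
        conn.close()
    # Deletions are not handled here; a production sync would also remove
    # rows whose airtable_id no longer appears in the pull.
```

Storing the fields as jsonb keeps the sync schema-agnostic: a business user adding a column in Airtable doesn't break the mirror, only the queries that care about the new field.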
Signals That It's Time to Migrate Off Entirely
We tell clients to start planning a migration off Airtable when any two of these are true:
- Record count in any one table is approaching 100k and growing.
- You've added more than two services that depend on the same base — coordination cost is now real.
- Schema changes are causing production incidents more than once a quarter.
- You need transactions, foreign key constraints, or partial-update guarantees that Airtable cannot provide.
- The 5 req/sec ceiling is showing up in your error budgets.
- The business team no longer self-edits — engineering owns all changes through the API anyway.
That last point is the most important. The entire value proposition of Airtable is business-user editing. If engineering ends up owning the schema and writes through the API, you've been paying the costs of Airtable without getting the benefit. Migrate to Postgres and a proper admin UI (Retool, Filament, Forest Admin) and you'll be happier in six months.
Key Takeaways
- Airtable's real constraints: 5 req/sec per base, a ~50k practical record ceiling, no transactions, schema-as-UI.
- Use it for internal operational tools, content workflows, and low-traffic configuration data.
- Do not put it on customer-facing read paths or webhook hot paths.
- If you must integrate, use pyairtable with explicit rate limiting and idempotent upserts.
- The Postgres mirror pattern preserves business-user editing while giving production the read performance it needs.
- Migrate off when business users no longer self-edit — at that point you're paying the cost without the benefit.