The Decomposition: From Monolith to Microservices in a Sales Organization
Alex Rampell at a16z wrote an investment memo about Fly Homes that decomposed a real estate transaction into its component services. I read it and thought: he's describing what I already built. But he stopped at the first level. I went three levels deeper.
Level zero is the monolith. One sales agent does everything — qualifies the lead, searches inventory, books tours, collects documents, submits applications, follows up. This is how every brokerage in America works. It's also why every brokerage in America has the same margin problem: the human is the bottleneck and the cost center simultaneously.
Level one is special teams. Decompose the sales agent into functions: lead qualification, inventory matching, tour booking, document collection, application processing, follow-up. Each function is a team. Each team can be measured independently. This is where Rampell stopped. This is where most organizations stop because it's already hard enough to manage.
Level two is atomic tasks. Decompose each function into its smallest meaningful unit of work: parse intent, check budget, query database, rank results, draft message, check compliance, send, log. Each task is a ticket. Each ticket has an input, an output, a cost, and a success metric. This is where the architecture starts looking like Amazon's internal services.
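A ticket with an input, an output, a cost, and a success metric can be sketched as a small record type. This is an illustrative sketch, not the production schema; every field name here is my assumption.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Ticket:
    """One atomic unit of work: input, output, cost, success metric."""
    task: str                # e.g. "check_budget", "draft_message"
    payload: dict            # the input
    result: Any = None       # the output, filled in by whichever executor ran it
    cost_usd: float = 0.0    # what this execution cost
    succeeded: bool = False  # did it meet its success metric?

# A filled-in ticket after an LLM executor handled it:
t = Ticket(task="check_budget", payload={"lead_id": 42, "max_rent": 2500})
t.result, t.cost_usd, t.succeeded = True, 0.003, True
```

Because every ticket carries its own cost and success flag, the per-task economics in level three fall out of simple aggregation over tickets.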
Level three is replaceable executors. Each atomic task can be performed by any executor that satisfies the interface: an LLM at $0.003/task, a voice AI at $0.09/minute, a human at $3-7/task, or deterministic code at $0/task. The executor is interchangeable. The interface is sacred.
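The "interface is sacred, executor is interchangeable" idea maps directly onto a structural interface. Here is a minimal sketch using Python's `typing.Protocol`; the class and field names are mine, not the production code's.

```python
from typing import Protocol

class Executor(Protocol):
    """Anything that can perform an atomic task behind the interface."""
    cost_per_task: float
    def run(self, payload: dict) -> dict: ...

class CodeExecutor:
    """Deterministic code: effectively $0 per task."""
    cost_per_task = 0.0
    def run(self, payload: dict) -> dict:
        return {"within_budget": payload["rent"] <= payload["max_rent"]}

class HumanExecutor:
    """A human working a queue: $3-7 per task ($5 midpoint assumed here)."""
    cost_per_task = 5.0
    def run(self, payload: dict) -> dict:
        raise NotImplementedError("routed to a human work queue")

def execute(payload: dict, executor: Executor) -> dict:
    # The caller depends only on the interface, never on which executor runs.
    return executor.run(payload)

execute({"rent": 2100, "max_rent": 2500}, CodeExecutor())
```

Swapping `CodeExecutor` for an LLM-backed or human-backed executor changes the cost column and nothing else, which is the whole point of level three.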
# From the actual production code — Ken Insurance Agent
class KenAgent:
    """
    Handles:
    - Initial outreach to new leads
    - Response classification and handling
    - Info gathering (address, move-in date, email)
    - Quote delivery (only after Assurant API call)
    - Follow-up sequencing
    - Objection handling
    """

    def handle_inbound(self, lead, message):
        """
        Returns: (response, event, action_needed)

        action_needed can be:
        - {"action": "generate_quote", ...}
        - {"action": "lookup_property", ...}
        - {"action": "escalate_to_human", ...}
        - None
        """
        classification = classify_message(message)
        self._extract_statements(lead, message)
        response, event, action = \
            self._handle_intent(lead, classification, message)
        if event:
            self.state_machine.process_event(lead, event)
        return response, event, action
Look at that interface. handle_inbound takes a lead and a message. It returns a response, an event, and an optional action. The action might be "generate a quote" or "escalate to a human." The caller doesn't know or care whether the response was generated by an LLM, a template, or a human agent. The interface is the contract. The executor is swappable.
Four principles govern the whole thing:
- Modularization: Every function is a service. The locator agent, the insurance agent, the landlord rep agent, the financial analysis pipeline — four business units, four independent services, shared infrastructure.
- Replaceability: Any executor can be swapped without changing the system. When I switched from GPT-4 to Claude for the insurance agent, exactly zero business logic changed. The interface stayed the same. The executor swapped out.
- Sanctity: Each module owns its data and its contracts. The insurance agent doesn't touch the locator database. The locator agent doesn't know insurance exists. Clean boundaries.
- Operating leverage: Cost per task approaches zero as volume scales. The 30,000th conversation costs the same fraction of a cent as the first.
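The operating-leverage claim is easy to check with back-of-envelope arithmetic using the numbers already in the text ($0.003 per LLM task, $3-7 per human task; the $5 midpoint is my assumption).

```python
# Unit economics at 30,000 conversations, one task each (simplification).
llm_cost_per_task = 0.003   # from the text
human_cost_per_task = 5.00  # midpoint of the $3-7 range, my assumption

tasks = 30_000
llm_total = tasks * llm_cost_per_task      # ~ $90 total
human_total = tasks * human_cost_per_task  # $150,000 total
leverage = human_cost_per_task / llm_cost_per_task  # roughly 1,700x per task
```

The per-task cost is flat, so the leverage comes from the ratio, not from volume discounts: every task moved from the human column to the machine column keeps that ratio.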
This is the same architecture that makes Amazon work at planetary scale. The difference is that Amazon needed millions of dollars and thousands of engineers to build each service. I needed a new prompt and a new API integration.
Ben Thompson would call this the aggregation theory applied to a sales organization. The platform aggregates demand (leads), owns the customer relationship (the conversation), and modularizes supply (buildings, insurance, services). The advantage accrues to the aggregator — the one who owns the conversation interface and can plug in any supplier behind it.
The benefits are the same as software microservices: independent deployability, fault isolation, team autonomy, measurability. The challenges are also the same: coordination overhead, data consistency, the temptation to over-decompose. But the payoff — being able to replace a $7 human task with a $0.003 machine task without touching anything else in the system — that's the entire game.