# Architecture overview
This document describes the high-level architecture of aiohomematic, focusing on the main components and how they interact at runtime. It is intended for contributors and integrators who want to understand data flow, responsibilities, and the boundaries between modules.
Terminology: For definitions of Homematic-specific terms (Backend, Interface, Device, Channel, Parameter) and Home Assistant terms (Integration vs App), see the Glossary.
## Component Overview
```mermaid
graph TB
    subgraph Consumer["Consumer (Home Assistant)"]
        HA[Homematic IP Local Integration]
    end
    subgraph aiohomematic["aiohomematic Library"]
        CU[CentralUnit]
        subgraph Coordinators["Coordinators"]
            CC[ClientCoordinator]
            CfgC[ConfigurationCoordinator]
            DC[DeviceCoordinator]
            EC[EventCoordinator]
            HC[HubCoordinator]
            LC[LinkCoordinator]
        end
        subgraph Model["Model Layer"]
            DEV[Device]
            CH[Channel]
            DP[DataPoint]
        end
        subgraph Client["Client Layer"]
            IC[InterfaceClient]
            subgraph Backends["Backends"]
                CCUB[CcuBackend]
                JSONB[JsonCcuBackend]
                HGB[HomegearBackend]
            end
        end
        subgraph Store["Store Layer"]
            PERS[Persistent Caches]
            DYN[Dynamic Caches]
            VIS[Visibility Rules]
        end
        EB[EventBus]
    end
    subgraph Backend["Homematic Backend"]
        CCU[CCU3 / OpenCCU / Homegear]
    end
    HA --> CU
    CU --> CC
    CU --> DC
    CU --> EC
    CU --> HC
    CC --> IC
    IC --> CCUB
    IC --> JSONB
    IC --> HGB
    DC --> DEV
    DEV --> CH
    CH --> DP
    CCUB --> CCU
    JSONB --> CCU
    HGB --> CCU
    CU --> PERS
    CU --> DYN
    CU --> VIS
    EC --> EB
    DP -.-> EB
```

## Top-level components
- Central (aiohomematic/central): Orchestrates the whole system. Manages client lifecycles, creates devices and data points, runs a lightweight scheduler, exposes the local XML-RPC callback server for events, and provides a query facade over the runtime model and caches. The central is created via CentralConfig and realized by CentralUnit.
- Client (aiohomematic/client): Implements the protocol adapters to a Homematic backend (CCU, Homegear) using the Backend Strategy Pattern. The unified `InterfaceClient` abstracts XML-RPC and JSON-RPC calls, maintains connection health, and translates high-level operations (get/set value, put/get paramset, list devices, system variables, programs) into backend requests. Backends: `CcuBackend` (CCU3/CCU2 via XML-RPC + JSON-RPC), `JsonCcuBackend` (CUxD/CCU-Jack via JSON-RPC only), `HomegearBackend` (Homegear/pydevccu via XML-RPC). Infrastructure components include `CircuitBreaker` (prevents retry storms), `CommandRetryHandler` (retries transient command failures with exponential backoff), `CommandThrottle` (rate limiting and priority), and `RequestCoalescer` (deduplicates concurrent requests). A client belongs to one Interface (BidCos-RF, HmIP, etc.).
- Model (aiohomematic/model): Turns device and channel descriptions into runtime objects: Device, Channel, DataPoints, and Events. The model layer defines generic data point types (switch, number, sensor, select, …), hub objects for programs and system variables, custom composites for device-specific behavior, calculated data points for derived metrics, and combined data points for multi-parameter writable entities (e.g., timer value+unit pairs). The entry point create_data_points_and_events wires everything based on paramset descriptions and visibility rules.
- Store (aiohomematic/store): Provides persistence and fast lookup for device metadata and runtime values. Organized into subpackages:
  - persistent/: DeviceDescriptionRegistry and ParamsetDescriptionRegistry store descriptions on disk between runs. IncidentStore persists diagnostic incidents for post-mortem analysis. SessionRecorder captures RPC sessions for testing.
  - dynamic/: CentralDataCache, DeviceDetailsCache, CommandTracker, PingPongTracker hold in-memory runtime state and connection health. PingPongTracker includes a PingPongJournal for diagnostic events.
  - visibility/: ParameterVisibilityRegistry applies rules to decide which paramsets/parameters are relevant and which are hidden/internal.
  - types.py: Shared typed dataclasses (CachedCommand, PongTracker, PingPongJournal, IncidentSnapshot) for cache entries.
  - serialization.py: Session recording utilities for freeze/unfreeze of parameters.
- Support (aiohomematic/support.py and helpers): Cross-cutting utilities: URI/header construction for XML-RPC, input validation, hashing, network helpers, conversion helpers, and small abstractions used across central and client. aiohomematic/async_support.py provides helpers for periodic tasks.
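The client-layer resilience helpers listed above follow well-known patterns. As a minimal sketch of the circuit-breaker idea (the thresholds, method names, and time-based half-open probe below are illustrative, not aiohomematic's actual `CircuitBreaker` API):

```python
import time


class CircuitBreaker:
    """Opens after N consecutive failures; allows a probe after a cooldown.

    Illustrative sketch only; the real implementation lives in the client layer.
    """

    def __init__(self, failure_threshold: int = 3, cooldown: float = 30.0) -> None:
        self._failure_threshold = failure_threshold
        self._cooldown = cooldown
        self._failures = 0
        self._opened_at: float | None = None

    @property
    def allows_request(self) -> bool:
        if self._opened_at is None:
            return True  # closed: traffic flows normally
        if time.monotonic() - self._opened_at >= self._cooldown:
            return True  # half-open: allow one probe request
        return False  # open: fail fast instead of hammering the backend

    def record_success(self) -> None:
        self._failures = 0
        self._opened_at = None

    def record_failure(self) -> None:
        self._failures += 1
        if self._failures >= self._failure_threshold:
            self._opened_at = time.monotonic()
```

Failing fast while open is what prevents the retry storms mentioned above: callers get an immediate error instead of queuing more RPC attempts against a dead backend.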
## Dependency Injection Architecture
aiohomematic uses protocol-based dependency injection to reduce coupling and improve testability. The architecture follows a three-tier strategy — components at every layer receive only the protocol interfaces they actually need, never the whole CentralUnit:
- Tier 1 — Infrastructure layer: coordinators (`CacheCoordinator`, `DeviceRegistry`, `EventCoordinator`, `BackgroundScheduler`, …) receive a small set of protocol interfaces via constructor injection.
- Tier 2 — Coordinator layer: higher-level coordinators (`ClientCoordinator`, `HubCoordinator`, `Hub`) compose other coordinators, again using only protocols (e.g. `ClientFactoryProtocol` for client creation).
- Tier 3 — Model layer: `Device`, `Channel`, and the `DataPoint` hierarchy are constructed with protocol interfaces; channels access them through their parent device.
Benefits: complete decoupling from CentralUnit, protocol-based mocking in tests, and a clear dependency contract at every level.
### Source of truth for protocols
All protocol interfaces — with full categorisation, member signatures, and sub-protocol composition — live in the module docstring of aiohomematic/interfaces/__init__.py. That module is the authoritative reference; this page deliberately does not duplicate the list so the two cannot drift.
Further reading:
- ADR 0002 — Protocol-Based Dependency Injection
- ADR 0003 — Explicit over Composite Protocol Injection
- ADR 0010 — Protocol Combination Analysis
- Protocol Selection Guide — decision trees, hierarchy diagrams, common patterns
## Responsibilities and boundaries
- Central vs Client
  - Central owns system composition: it creates and starts/stops clients per configured interface, starts the XML-RPC callback server, and maintains the runtime model and caches.
  - Central implements all protocol interfaces and injects them into coordinators during construction.
  - Client owns protocol details: it knows how to talk to the backend via XML-RPC or JSON-RPC, how to fetch lists and paramsets, and how to write values. Central should not embed protocol specifics; instead it calls client methods.
- Model vs Central/Client
  - Model is pure domain representation plus transformation from paramset descriptions to concrete data points/events. It must not perform network I/O. It consumes metadata provided by Central/Client and exposes typed operations on DataPoints (which then delegate to the client for I/O through the device/channel back-reference).
  - Model layer (Device, Channel, DataPoint) uses full dependency injection with protocol interfaces, achieving complete decoupling from CentralUnit.
- Coordinators
  - All coordinators use full dependency injection with protocol interfaces.
  - Infrastructure coordinators (CacheCoordinator, DeviceCoordinator, DeviceRegistry, EventCoordinator) receive only protocol interfaces.
  - Factory coordinators (ClientCoordinator, HubCoordinator) use ClientFactoryProtocol and other protocol interfaces for all operations including object creation.
  - Facade coordinators (ConfigurationCoordinator, LinkCoordinator) provide high-level operations for device configuration and link management, delegating to clients and the device registry via protocol interfaces.
### Coordinator responsibility matrix
| Coordinator | Responsibility | Does NOT do |
|---|---|---|
| CacheCoordinator | Manages all persistent and dynamic caches (device descriptions, paramsets, data values, session recordings). Loads/saves to disk, clears on stop. | Does not fetch data from backends (clients do that). Does not create devices or data points. |
| ClientCoordinator | Manages client lifecycle (creation, initialization, connection, failure tracking) for each configured interface. | Does not manage client internal state machines. Does not perform backend operations (clients do that). |
| ConfigurationCoordinator | High-level facade for device configuration operations (paramset read/write, validation, parameter discovery). | Does not manage devices or channels. Does not track parameter change events. |
| ConnectionRecoveryCoordinator | Unified connection recovery: retry attempts, staged reconnection (TCP check, RPC check, warmup, reconnect, data load), central state transitions. | Does not manage individual client instances. Does not create devices. |
| DeviceCoordinator | Device discovery, creation, removal, and lifecycle operations including paramset consistency checks. | Does not manage caches directly (CacheCoordinator). Does not handle data point subscriptions (EventCoordinator). |
| EventCoordinator | Event subscriptions for data points and system variables, routes backend callbacks to EventBus, publishes typed lifecycle and trigger events. | Does not manage data points directly. Does not handle client connections. |
| HubCoordinator | Manages hub-level data points: programs, system variables, install mode, connectivity, metrics, service messages, alarm messages, inbox, system updates. | Does not manage device-level data points. Does not handle device discovery. |
| LinkCoordinator | High-level facade for device direct link management (listing, discovering linkable candidates, creating, removing, updating links). | Does not manage devices directly. Does not create channels. |
### Coordinator interaction matrix
| Coordinator | Cache | Client | Config | Recovery | Device | Event | Hub | Link |
|---|---|---|---|---|---|---|---|---|
| Cache | - | - | - | x | x | x | - | - |
| Client | x | - | - | x | x | - | x | - |
| Configuration | - | x | - | - | x | - | - | - |
| ConnectionRecovery | x | x | - | - | x | - | x | - |
| Device | x | x | - | - | - | x | - | - |
| Event | - | x | - | - | - | - | - | - |
| Hub | - | - | - | - | - | x | - | - |
| Link | - | x | - | - | x | - | - | - |
Legend: x = depends on (reads from or delegates to), - = no dependency.
- Caches
  - Persistent caches are loaded/saved by Central during startup/shutdown and used by Clients to avoid redundant metadata fetches.
  - Dynamic caches are updated by Clients and Central when values change, and consulted to answer quick queries or de-duplicate work.
  - All cache classes use dependency injection to receive only required interfaces.
- Support
  - Shared, stateless helpers. No long-lived state; safe to import anywhere.
## Key runtime interactions
### Startup/connection
- CentralConfig is created with central name, host, credentials, interface configs, and options.
- CentralConfig.create_central() builds a CentralUnit. CentralUnit._create_clients() creates one Client per enabled Interface.
- CentralUnit.start():
  - Validates configuration and, if enabled, starts the local XML-RPC callback server (xml_rpc_server) so the backend can push events.
  - Loads persistent caches (device/paramset descriptions) and initializes clients.
  - Initializes the Hub (programs, system variables) and starts the BackgroundScheduler for periodic refresh and health checks.
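The ordering above can be condensed into a single async sequence. The step labels below are illustrative, not real method names; the `asyncio.sleep(0)` stands in for the actual awaited work:

```python
import asyncio


async def start_central(on_step) -> None:
    """Sketch of the documented start() ordering (illustrative, not the real API)."""
    for step in (
        "validate_config",
        "start_callback_server",   # local XML-RPC server so the backend can push events
        "load_persistent_caches",  # device/paramset descriptions from disk
        "init_clients",            # one client per enabled interface
        "init_hub",                # programs and system variables
        "start_scheduler",         # periodic refresh and health checks
    ):
        on_step(step)
        await asyncio.sleep(0)  # stand-in for the real async work of each stage
```

The key invariant is the order: caches load before clients initialize, so clients can skip redundant metadata fetches.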
### Device discovery and model creation
- Client.list_devices() fetches device descriptions from the backend (or uses cached copies if valid).
- For new or changed devices, DeviceCoordinator._add_new_devices() instantiates Device and Channel objects and attaches paramset descriptions.
- For each channel, create_data_points_and_events() (model package) iterates over paramset descriptions, applies ParameterVisibilityRegistry rules, creates Events where appropriate, and instantiates DataPoints via the generic/custom/calculated factories.
- Central indexes DataPoints and Events for quick lookup and subscription management.
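A heavily reduced sketch of the per-channel wiring step: iterate a paramset description, drop parameters that visibility rules hide, and choose a data point kind. The description format (`TYPE`/`WRITABLE` keys) and the factory logic here are simplified assumptions, not the real `create_data_points_and_events`:

```python
# Illustrative reduction of data point creation from a paramset description.
def create_data_points(paramset: dict[str, dict], hidden: set[str]) -> dict[str, str]:
    points: dict[str, str] = {}
    for parameter, desc in paramset.items():
        if parameter in hidden:
            continue  # visibility rules filter internal/hidden parameters
        if desc.get("TYPE") == "BOOL" and desc.get("WRITABLE"):
            points[parameter] = "switch"
        elif desc.get("TYPE") in ("FLOAT", "INTEGER") and desc.get("WRITABLE"):
            points[parameter] = "number"
        else:
            points[parameter] = "sensor"  # read-only parameters become sensors
    return points
```

The real factories additionally consult custom device profiles and calculated/combined data point definitions before falling back to the generic types.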
### State read and write
- Reads
  - Central or a consumer requests a value: Client.get_value(channel_address, paramset_key, parameter) performs the appropriate RPC call (XML-RPC or JSON-RPC) and returns a converted value (model.support.convert_value is used where necessary). Results may be stored in dynamic caches.
- Writes
  - A consumer calls DataPoint.set_value(...), which delegates to the owning Device/Channel/Client. InterfaceClient.set_value() validates the value and sends the RPC write via the backend. Optionally the system waits for an event confirming the new value; otherwise the value may be written into a temporary cache and later reconciled.
### Event handling and data point updates
- The backend pushes events to the local XML-RPC callback server (Central's xml_rpc_server). Each event carries interface_id, channel_address, parameter, and value.
- EventCoordinator.data_point_event(interface_id, channel_address, parameter, value) is invoked via decorator wiring. It looks up the target DataPoint by channel+parameter.
- The DataPoint's internal state is updated; events are published to subscribers via EventBus. Central updates last event timestamps and connection health.
- If events indicate new devices or configuration changes, Central may trigger scans to fetch updated descriptions and update the model accordingly.
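A minimal sketch of the subscription lookup: the callback's `(interface_id, channel_address, parameter, value)` tuple is routed to subscribers keyed by channel and parameter. The bus body below is illustrative, not the real `EventBus`:

```python
from collections import defaultdict
from typing import Callable


class SketchEventBus:
    """Routes backend callbacks to subscribers keyed by (channel, parameter)."""

    def __init__(self) -> None:
        self._subscribers: dict[tuple[str, str], list[Callable]] = defaultdict(list)

    def subscribe(self, channel_address: str, parameter: str, cb: Callable) -> None:
        self._subscribers[(channel_address, parameter)].append(cb)

    def data_point_event(
        self, interface_id: str, channel_address: str, parameter: str, value: object
    ) -> None:
        # Only subscribers of this exact channel+parameter pair are notified.
        for cb in self._subscribers[(channel_address, parameter)]:
            cb(value)
```

In the real system the coordinator also updates the DataPoint's internal state and connection-health timestamps before publishing.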
## JSON-RPC vs XML-RPC data flow
- XML-RPC
  - Used primarily for event callbacks and many CCU operations. Client uses AioXmlRpcProxy to issue method calls to the backend. The local rpc_server exposes endpoints for the backend’s event callbacks.
- JSON-RPC
  - Optional, when the backend provides a JSON API. InterfaceClient with CcuBackend or JsonCcuBackend routes some operations through JsonRpcAioHttpClient. Choice of backend per interface is encapsulated by the Backend Strategy.
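The per-interface backend choice is a Strategy pattern. A hypothetical selector makes the mapping from the backends listed earlier explicit (the real decision is encapsulated inside the client layer, and these flags are illustrative):

```python
def select_backend(json_only: bool, is_homegear: bool) -> str:
    """Hypothetical sketch of the per-interface backend strategy choice."""
    if is_homegear:
        return "HomegearBackend"  # Homegear/pydevccu: XML-RPC only
    if json_only:
        return "JsonCcuBackend"   # CUxD / CCU-Jack: JSON-RPC only
    return "CcuBackend"           # CCU: XML-RPC + JSON-RPC combined
```

Because the choice is made once per interface, the rest of the client code talks to a uniform backend interface and never branches on transport.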
## Caching strategy
- Persistent caches (on disk)
  - DeviceDescriptionRegistry and ParamsetDescriptionRegistry reduce cold-start time and load on the backend. Central decides when to refresh and when to trust cached data (based on age and configuration).
  - IncidentStore persists diagnostic incidents (e.g., PING_PONG_MISMATCH_HIGH, PING_PONG_UNKNOWN_HIGH) for post-mortem analysis. It uses a save-on-incident, load-on-demand strategy with automatic cleanup of old incidents.
- Dynamic caches (in memory)
  - CentralDataCache holds recent values and metadata to accelerate lookups and avoid redundant conversions.
  - CommandTracker and PingPongTracker support write-ack workflows and connection health checks.
  - PingPongTracker includes a PingPongJournal ring buffer for tracking PING/PONG events and RTT statistics.
  - DeviceDetailsCache stores supplementary per-device data fetched on demand.
- Visibility cache
  - ParameterVisibilityRegistry determines which parameters are exposed as DataPoints/events, influenced by user un-ignore lists and marker rules.
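The PingPongJournal mentioned above is a bounded ring buffer of PING/PONG events. A minimal sketch with `collections.deque` (the field names and the RTT statistic here are assumptions, not the real journal):

```python
from collections import deque


class SketchPingPongJournal:
    """Bounded journal: oldest entries are evicted automatically."""

    def __init__(self, maxlen: int = 10) -> None:
        self._events: deque[tuple[str, float]] = deque(maxlen=maxlen)

    def record(self, kind: str, rtt: float) -> None:
        self._events.append((kind, rtt))  # deque drops the oldest entry when full

    @property
    def mean_rtt(self) -> float:
        rtts = [rtt for _, rtt in self._events]
        return sum(rtts) / len(rtts) if rtts else 0.0
```

A bounded buffer keeps diagnostic memory usage constant while still giving health checks a recent-history window for RTT statistics.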
## Concurrency model
- Central runs an asyncio-based BackgroundScheduler that periodically:
  - Checks connection health and reconnection needs.
  - Refreshes hub data (programs/system variables) and firmware update information.
  - Optionally polls devices for values where push is unavailable.
- I/O operations in Clients are fully async; long-running operations are awaited and protected by timeouts (see const.TIMEOUT) and command queues.
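The scheduler behaviour described above, an interval loop whose iterations are each protected by a timeout, can be sketched as follows (this is illustrative, not the real `BackgroundScheduler` API):

```python
import asyncio


async def run_periodic(job, interval: float, iterations: int, timeout: float = 5.0) -> int:
    """Run an async job at a fixed interval; each run is bounded by a timeout."""
    completed = 0
    for _ in range(iterations):
        try:
            await asyncio.wait_for(job(), timeout=timeout)  # timeout per iteration
            completed += 1
        except asyncio.TimeoutError:
            pass  # skip this cycle; a health check may flag the connection
        await asyncio.sleep(interval)
    return completed
```

Bounding each iteration rather than the whole loop means one slow backend call cannot stall subsequent health checks.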
## Extension points
- New device profiles: Add custom DataPoints under `model/custom/` and register them via `DeviceProfileRegistry.register()`. See docs/developer/extension_points.md for detailed instructions.
- Calculated sensors: Implement in `model/calculated/` and add to `_CALCULATED_DATA_POINTS` in `model/calculated/__init__.py`.
- Combined data points: Implement in `model/combined/` using `CombinedTimerField` descriptors on `CustomDataPoint` subclasses. See docs/developer/extension_points.md for details.
- Backends/interfaces: Implement a new Client subclass and corresponding protocol proxy to add support for another backend or transport.
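Registering a custom profile might look like the following self-contained sketch. `DeviceProfileRegistry.register()` is named above, but the registry body, the `resolve` helper, the model string, and the profile class here are all simplified assumptions; see docs/developer/extension_points.md for the real mechanics:

```python
class SketchProfileRegistry:
    """Illustrative stand-in for DeviceProfileRegistry."""

    def __init__(self) -> None:
        self._profiles: dict[str, type] = {}

    def register(self, model: str, data_point_cls: type) -> None:
        self._profiles[model] = data_point_cls  # map device model -> custom class

    def resolve(self, model: str, default: type) -> type:
        # Unknown models fall back to the generic data point type.
        return self._profiles.get(model, default)


class GenericDataPoint:
    """Stand-in for the generic data point base."""


class MyCoverDataPoint(GenericDataPoint):
    """Hypothetical custom composite for a blind/cover actuator."""
```

The registry pattern keeps device-specific behavior out of the factory: the generic creation path simply asks the registry and falls back to the generic types.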
## Glossary (selected types)
- CentralUnit: The orchestrator instance created from CentralConfig.
- Client: Protocol adapter for a single interface towards CCU/Homegear.
- Device/Channel: Domain model reflecting backend device topology.
- DataPoint: Addressable parameter on a channel, with read/write and event capabilities.
- Event: Push-style notification mapped to selected parameters (e.g., button clicks, device errors).
- Hub: Program and System Variable data points provided by the backend itself.
## Further reading
- Data flow details (XML-RPC/JSON-RPC, events, updates)
- Sequence diagrams (connect, discovery, propagation, state machines, health tracking, recovery)
- Event reference — complete event type documentation
- Event-driven metrics — metrics and observability architecture
## Architectural Decision Records (ADRs)
All architectural decisions are documented as formal ADRs in the adr/ directory:
| ADR | Title | Status |
|---|---|---|
| 0001 | CircuitBreaker and CentralConnectionState Coexistence | Accepted |
| 0002 | Protocol-Based Dependency Injection | Accepted |
| 0003 | Explicit over Composite Protocol Injection | Accepted |
| 0004 | Thread-Based XML-RPC Server | Accepted |
| 0005 | Unbounded Parameter Visibility Cache | Accepted |
| 0006 | Event System Priorities and Batching | Accepted |
| 0007 | Device Slots Reduction via Composition | Rejected |
| 0008 | TaskGroup Migration | Deferred |
| 0009 | Interface Event Consolidation | Accepted |
| 0010 | Protocol Combination Analysis | Accepted |
| 0011 | Storage Abstraction | Accepted |
| 0012 | Async XML-RPC Server POC | Accepted |
| 0013 | Interface Client Backend Strategy | Accepted |
| 0013a | Implementation Status (satellite) | Accepted |
| 0014 | Retry Logic Removal | Accepted |
| 0015 | Description Normalization Concept | Accepted |
| 0016 | Paramset Description Patching | Accepted |
| 0017 | Startup Auth Error Handling | Accepted |
| 0018 | Contract Tests | Accepted |
| 0019 | Derived Binary Sensors | Accepted |
| 0020 | Command Throttling with Priority Queue and Optimistic Updates | Accepted |
| 0021 | Blind Command Processing Lock and Target Preservation | Accepted |
| 0022 | Unified Schedule Access via WeekProfileDataPoint | Accepted |
| 0023 | Paramset Consistency Checker | Implemented |
| 0024 | CCU Translation Extraction | Implemented |
## Notes
- This is a high-level overview. For detailed API and exact behavior, consult the module docstrings and tests under tests/ which cover most features and edge cases.