Common API Surface (DaasAPI)

The DaaS-IoT Common API Surface is defined by the DaasAPI class exposed by libdaas. All platform SDKs and language bindings (C++, Python, Qt, Java) wrap or mirror this interface.

This page describes the conceptual surface of DaasAPI — that is, the operations that every SDK is expected to provide in some idiomatic form.

Note

The exact naming and signatures may vary per language (e.g. methods vs functions, exceptions vs error codes), but the semantics and behavior are shared across all SDKs.


1. Runtime & Construction

At the heart of every DaaS-IoT node there is a runtime instance that manages:

  • local node identity (SID, DIN)
  • enabled drivers and overlay links
  • discovery and mapping
  • time synchronization (dATS/ATS)
  • data exchange (DDO push/pull)
  • monitoring and performance probes (Frisbees)

On C++/native platforms, this role is implemented by the DaasAPI class.

1.1 Event integration

The constructor binds the runtime to an event handler implementation: an IDaasApiEvent interface that receives:

  • node discovery events
  • incoming DDO notifications
  • time sync completion
  • Frisbee results, etc.

Each SDK exposes its own idiomatic way to attach this handler (callback interface, abstract class, or similar).
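
A minimal C++ sketch of this binding is shown below. DaasAPI and IDaasApiEvent are the names used by libdaas, but the header path, the constructor signature, and the callback method names are assumptions made for illustration; the real interface is defined by the SDK headers.

    // Sketch only: header path, constructor signature and callback names
    // are assumptions, not the exact libdaas declarations.
    #include "daas.hpp"                          // assumed umbrella header

    class MyEvents : public IDaasApiEvent {
    public:
        // Hypothetical callbacks for the event categories listed above.
        void onNodeDiscovered(int din)      { /* track the newly mapped node */ }
        void onDdoReceived(int originDin)   { /* schedule a pull from originDin */ }
        void onSyncCompleted()              { /* synchronized timestamps are now valid */ }
        void onFrisbeeResult(int din)       { /* record availability / latency */ }
    };

    MyEvents events;
    DaasAPI node(&events);                       // runtime bound to the handler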

1.2 Runtime metadata

The core API exposes basic runtime information such as:

  • Available drivers
    Query the list of communication technologies supported by the current build (e.g. INET4, UART, Bluetooth).

  • Version & build info
    Retrieve the libdaas version and build metadata used by the SDK.

This is useful for diagnostics, feature gating, and compatibility checks.
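
For instance, a startup diagnostic might log the build information and the driver list. The accessor names below (getVersion, listAvailableDrivers) are placeholders for whatever the SDK actually exposes:

    // Placeholder accessor names; the real getters vary per SDK.
    #include <iostream>

    void logRuntimeInfo(DaasAPI& node) {
        std::cout << "libdaas version: " << node.getVersion() << "\n";
        for (const auto& drv : node.listAvailableDrivers())
            std::cout << "driver available: " << drv << "\n";   // e.g. INET4, UART
    }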


2. Node Lifecycle

DaaS-IoT nodes follow a simple lifecycle managed through DaasAPI.

2.1 Initialization

The core initialization call:

  • configures the local SID (overlay domain)
  • assigns the local DIN (node identifier)
  • allocates internal resources

Typical flow:

  1. Construct the runtime and attach an event handler.
  2. Call the init routine with (SID, DIN) of the local node.
  3. Configure drivers (see next section).
  4. Start the core loop / engine.

If init fails, the node is considered inactive and no overlay operations should be attempted.
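
Put together, steps 2 to 4 might look like the sketch below. Only init, _LINK_INET4 and doPerform are named as on this page; the SID/DIN values, the enableDriver name, and the return conventions are assumptions:

    // Illustrative startup sequence; signatures and return values are assumptions.
    bool startNode(DaasAPI& node) {
        if (node.init(100, 42) != 0)                          // example SID=100, DIN=42
            return false;                                     // inactive: no overlay operations
        node.enableDriver(_LINK_INET4, "192.168.1.10:2020");  // step 3, see section 3
        node.doPerform();                                     // step 4, start the core engine
        return true;
    }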

2.2 Core loop (doPerform)

After initialization, the node must run its core engine:

  • Threaded mode
    The core loop runs in an internal thread; user code just calls into the API.

  • Non-threaded (real-time) mode
    The application explicitly drives the core loop, suitable for tight RT environments or where the main loop is controlled externally.

Internally this loop handles:

  • discovery beacons
  • ATS/dATS synchronization messages
  • DDO delivery and event dispatching
  • link management
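
In the non-threaded (real-time) mode the application pumps the engine itself, typically from its own main loop. A hedged sketch follows, assuming node is an initialized DaasAPI instance; how doPerform is invoked per iteration is an assumption about the API:

    // Non-threaded mode sketch: the application drives the core loop.
    // The per-iteration call shown here is an assumption about the API.
    while (applicationRunning) {      // application-owned flag
        node.doPerform();             // process beacons, sync, DDOs, link management
        doApplicationWork();          // the rest of the RT loop (application code)
    }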

2.3 Reset & shutdown

The lifecycle includes:

  • Reset
    Clears the local state and mappings but keeps the runtime usable.

  • End
    Releases resources and deactivates the node. After this, a new init is required before the node can participate in the overlay again.

A typical service:

  • init once at startup
  • run doPerform for the entire lifetime
  • call end/reset only on reconfiguration or controlled shutdown

2.4 Local node status

The API exposes a local node status object summarizing:

  • hardware / firmware version
  • enabled links
  • synchronization status
  • security policy (lock/keys)
  • capability flags

Bindings map this to an idiomatic type (struct, dataclass, DTO, …).


3. Drivers & Node Mapping

Drivers represent the underlying communication technologies used by the overlay: for example INET4, UART, MQTT5, Bluetooth.

3.1 Enabling a driver

Enabling a driver follows a standard pattern:

  • Specify the driver ID (e.g. _LINK_INET4)
  • Provide a local URI (e.g. 192.168.1.10:2020)

If successful, the link becomes available for:

  • discovery beacons
  • node mapping
  • data exchange (DDO push/pull, RT sessions, Frisbees)

Each SDK should expose methods to:

  • list drivers supported on this build
  • enable one or more drivers with a technology-specific URI
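
A sketch of the pattern, reusing the _LINK_INET4 identifier and URI format shown above; node is an initialized DaasAPI instance and the enableDriver method name is an assumption:

    // Hypothetical method name; driver ID and URI format as described above.
    node.enableDriver(_LINK_INET4, "192.168.1.10:2020");  // IPv4 endpoint on port 2020
    // Other technologies take technology-specific URIs, e.g. a serial device
    // path for UART or a broker address for MQTT5.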

3.2 Mapping remote nodes

Once drivers are enabled, the runtime can map remote DINs:

  • automatic mapping via Discovery
  • explicit mapping by providing:
      • remote DIN
      • driver/link
      • remote URI
      • optional security key

Mapped nodes are tracked internally and can be enumerated via a “list known nodes” call.
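
Explicit mapping might look like the following sketch; map and listKnownNodes are placeholder names, and the parameters mirror the list above:

    // Placeholder method names; parameters follow the list above.
    node.map(77,                        // remote DIN
             _LINK_INET4,               // driver / link
             "192.168.1.20:2020",       // remote URI
             "optional-security-key");  // optional key
    auto known = node.listKnownNodes(); // enumerate the mapped DINs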

3.3 Removing nodes

The API provides a way to:

  • remove obsolete or unreachable DINs from the local table
  • keep the mapping table compact and relevant

This is orthogonal to discovery: even if a node is removed manually, discovery may reintroduce it if it is still active on the network.


4. Discovery & Topology

Discovery is the process by which nodes announce themselves and learn about other nodes in the same SID.

The common API surface provides:

  • a discovery trigger that:
      • broadcasts enabled driver URIs
      • allows remote nodes to auto-map the current node
      • can run across all enabled links or on a specific driver

  • a locate operation that:
      • actively searches for a specific DIN
      • optionally waits up to a configurable timeout

Bindings translate these into:

  • discover() / locate() calls
  • event callbacks via IDaasApiEvent (or language-specific equivalent)
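
For instance, with node an initialized DaasAPI instance (the optional driver argument, the timeout value, and the exact signatures are assumptions):

    // discover()/locate() as named above; signatures are assumptions.
    node.discover();                     // announce on all enabled links
    node.discover(_LINK_INET4);          // or restrict the beacon to one driver
    bool found = node.locate(77, 5000);  // search for DIN 77, wait up to 5000 ms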

For conceptual details, see:


5. Time Synchronization (ATS / dATS)

To support time-synchronized overlays, the common API exposes:

  • Synchronized timestamp
    Returns the current time corrected by the ATS/dATS layer. All time-critical operations (logging, latency estimation, ordering) must rely on this value.

  • Network/node sync helpers
    Operations that:
      • propagate local system time to remote nodes
      • synchronize with a specific DIN
      • synchronize a group or network subject to a maximum permitted error

  • ATS error bound
    A configurable threshold specifying the maximum acceptable synchronization error (in milliseconds). This value influences when synchronization is considered “good enough”.
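
A sketch of how these pieces might fit together, assuming node is an initialized DaasAPI instance; all method names here are placeholders:

    // Placeholder names for the operations described above.
    node.setAtsErrorBound(10);                 // accept at most 10 ms of sync error
    node.syncWith(77);                         // synchronize with DIN 77
    uint64_t ts = node.getSyncedTimestamp();   // ATS/dATS-corrected time
    // time-critical code (logging, ordering, latency) uses ts, not the system clock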

The exact underlying protocol (ATS, RoATS, dATS) is described in:


6. Data Plane — DDO Push/Pull & Typesets

The data plane is centered around DDOs (Data Delivery Objects) and typesets, which are part of the common API model.

6.1 Typesets

The API provides:

  • a registry of user-defined typesets
  • operations to:
      • add a new typeset (code + nominal payload size)
      • list registered typesets

This is the cross-SDK mechanism that labels each DDO with its semantic type. See:
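
For example, registering an application typeset; the addTypeset/listTypesets names and the numeric code are illustrative:

    // Illustrative typeset registration: code 0x0101, nominal payload 32 bytes.
    node.addTypeset(0x0101, 32);          // e.g. "temperature sample"
    auto typesets = node.listTypesets();  // enumerate what is registered locally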

6.2 Push/Pull operations

The core primitives are:

  • Push
    Send a DDO to a remote DIN:
      • create and populate a DDO
      • set the typeset
      • push it to the target DIN

  • Pull
    Retrieve DDOs pending from a remote DIN:
      • query how many DDOs are available
      • pull them one by one
      • inspect typeset, origin, timestamp, payload

  • Available count
    A helper to check how many DDOs can be pulled before performing a full pull.

Bindings expose these as high-level methods working with language-native objects while preserving the underlying semantics.
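
A C++-flavoured sketch of the two sides, assuming node is an initialized DaasAPI instance; the DDO accessors, buffer handling, and method names are assumptions, and each SDK exposes its own DDO type:

    // Push side (names are assumptions):
    DDO ddo;
    ddo.setTypeset(0x0101);                     // label the payload
    ddo.setPayload(buffer, length);             // application data (buffer/length are yours)
    node.push(77, ddo);                         // send to remote DIN 77

    // Pull side:
    while (node.availableDDOs(77) > 0) {        // check before pulling
        DDO in = node.pull(77);                 // one DDO at a time
        handle(in.typeset(), in.origin(), in.timestamp(), in.payload());  // application code
    }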


7. Real-Time Sessions

In addition to message-oriented DDO exchange, the common API surface supports real-time sessions between two DINs.

Typical flow:

  1. Start session
Establish an RT session with a remote DIN.

  2. Send data
    Transmit raw binary buffers on that RT channel.

  3. Check availability
    Check if data is available on the RT channel.

  4. Receive data
    Read inbound data up to a maximum size.

  5. End session
    Close the RT session with the remote DIN.

RT sessions are independent of DDO push/pull and are typically used for time-sensitive or streaming-style payloads where packet framing is managed by the application.
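
The five steps map onto calls like the following, assuming node is an initialized DaasAPI instance; the names are placeholders and framing of the byte stream is up to the application:

    // Placeholder names for the five RT-session steps.
    node.startRtSession(77);                             // 1. open a channel to DIN 77
    node.rtSend(77, frame, frameLen);                    // 2. raw binary buffer (yours)
    if (node.rtAvailable(77) > 0) {                      // 3. anything to read?
        uint8_t buf[512];
        size_t n = node.rtReceive(77, buf, sizeof buf);  // 4. read up to 512 bytes
        // ... apply application-level framing to buf[0..n) ...
    }
    node.endRtSession(77);                               // 5. close the channel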


8. Node State, Security & Locking

The common API exposes a node state abstraction that captures:

  • last activity timestamps
  • hardware profile
  • link flags
  • synchronization status
  • security configuration
  • basic capabilities and counters

Key operations include:

  • Fetch remote node state
    Query and locally update the status of a remote DIN.

  • Lock/unlock
    Configure security parameters (e.g. security key, policy) for the local node or for remote nodes, depending on the operation.

  • Send status
    Push the local node status to a remote DIN.

These capabilities are used both for monitoring purposes and for enforcing security and access policies at overlay level.
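
A hedged monitoring sketch, assuming node is an initialized DaasAPI instance; the method and field names are placeholders:

    // Placeholder names; the real status type and accessors differ per SDK.
    node.fetchNodeState(77);            // refresh the local view of DIN 77
    auto state = node.getNodeState(77);
    if (!state.isSynced())
        node.syncWith(77);              // re-synchronize a drifting node
    node.sendStatus(77);                // push our own status in return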

For security implications and patterns, see:


9. Statistics & Diagnostics

To support observability and diagnostics, the common API surface includes:

  • Statistics reset
    Clear local statistics counters.

  • Statistics query
    Query counters by code (e.g. bytes sent, packets handled, DDO-related metrics). The exact set of codes is defined by the SDK and may be extended across versions.
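
For example, assuming node is an initialized DaasAPI instance; the statistic codes and method names below are invented placeholders:

    // Placeholder statistic codes and method names.
    node.resetStatistics();                                // start a fresh measurement window
    uint64_t sent = node.getStatistic(STAT_BYTES_SENT);    // codes are defined by the SDK
    uint64_t ddos = node.getStatistic(STAT_DDO_HANDLED);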

These features allow you to build:

  • monitoring dashboards
  • performance benchmarks
  • regression tests for new releases

10. Frisbees & Performance Probes

Frisbees are lightweight system messages used as heartbeat and performance probes between nodes.

The common API surface provides:

  • a generic Frisbee ping:
      • check availability of a remote DIN
      • measure latency at overlay level

  • a Frisbee ICMP-style ping with:
      • configurable timeout
      • retry policy

  • a Frisbee DPERF test:
      • send a configurable number of blocks of a given size
      • measure throughput and timing
      • retrieve a result structure with counters and timestamps

Conceptually, Frisbees underpin:

  • connectivity checks
  • latency estimation
  • performance profiling across links and topologies
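
A hedged sketch of the three probe styles, assuming node is an initialized DaasAPI instance; the method names, timeout, retry, and block parameters are assumptions:

    // Placeholder names and parameters for the three probe styles.
    bool alive = node.frisbee(77);                  // generic availability / latency probe
    bool ok    = node.frisbeePing(77, 1000, 3);     // ICMP-style: 1000 ms timeout, 3 retries
    auto perf  = node.frisbeeDperf(77, 100, 1024);  // DPERF: 100 blocks of 1024 bytes
    // perf would carry the counters and timestamps used for throughput analysis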

For the conceptual model, see:


11. Configuration Persistence

The runtime can store and reload its configuration through an abstract persistence interface.

In the common API surface:

  • a storage interface (e.g. IDepot in C++) is provided by the user
  • the runtime can:
      • store configuration through this interface
      • load configuration on startup, reusing previously persisted settings

Typical use cases:

  • embedded devices with flash storage
  • gateways that must restore mappings and driver configuration on reboot
  • devices provisioned once and then deployed in the field

Bindings map the storage interface to:

  • file-based backends
  • key-value stores
  • platform-specific persistence mechanisms
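
A sketch of a file-backed depot; IDepot is the C++ name mentioned above, but its virtual methods and the store/load calls on the runtime are assumptions:

    // IDepot is named above; the virtual method names are assumptions.
    #include <string>

    class FileDepot : public IDepot {
    public:
        explicit FileDepot(std::string path) : path_(std::move(path)) {}
        bool store(const std::string& blob) override { /* write blob to path_ */ return true; }
        std::string load() override                  { /* read blob from path_ */ return {}; }
    private:
        std::string path_;
    };

    FileDepot depot("/var/lib/daas/config.bin");
    // placeholder calls: node.storeConfiguration(depot), node.loadConfiguration(depot)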

12. Error Handling Model

The Common API Surface uses two families of return values:

  • Error codes
    Operations that can fail for protocol or environment reasons commonly return an error enumeration (e.g. success vs. specific error conditions).

  • Boolean / size returns
    Some operations return:
      • a boolean indicating success vs failure
      • a size value for data sent/received

Language bindings map this to:

  • exceptions (raised on non-success codes)
  • result objects
  • idiomatic error handling patterns
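
A C++-flavoured sketch of the error-code style, assuming node and ddo from the earlier examples; the enumeration value is a placeholder:

    // Placeholder error enumeration; bindings may surface this as an exception instead.
    auto rc = node.push(77, ddo);
    if (rc != ERROR_NONE) {
        // map to an exception, a result object, or a retry, as fits the binding
    }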

13. How SDKs Map This Surface

Each SDK wraps DaasAPI and exposes an idiomatic interface:

  • Linux / Windows / macOS C++ SDKs
    Direct C++ wrappers exposing the same class and methods, plus thin helpers.

  • Python SDK
    Classes and functions mirroring the same lifecycle and operations, with Pythonic conventions (context managers, exceptions, etc.).

  • Qt C++ SDK
    A Qt-style wrapper integrating with the event loop and signal/slot model.

  • Java SDK
    Java classes and method sets that closely resemble the DaasAPI interface while following Java naming and packaging conventions.

From this page, you can proceed to: