Announcing iceoryx2 v0.5.0

Christian Eltzschig - 23/12/2024



What Is iceoryx2

iceoryx2 is a service-based inter-process communication (IPC) library designed to build robust and efficient decentralized systems. It enables ultra-low-latency communication between processes — similar to Unix domain sockets or message queues, but significantly faster and easier to use.

Check out the iceoryx2 benchmarks and try them out on your platform!


It also comes with advanced features such as circular buffers, history, event notifications, publish-subscribe messaging, and a decentralized architecture with no need for a broker.

Release v0.5.0

The iceoryx2 v0.5.0 release introduces several features that improve the user experience. Dynamic payloads allow publishers to handle variable payload sizes without predefining a maximum payload size. Health monitoring informs users when a process crashes in a decentralized system. The new WaitSet enables efficient event management by waiting on multiple events, such as listeners or sockets, in a single call, and it comes with advanced features such as deadlines and intervals to manage timing constraints. Together, these make a decentralized architecture more robust by allowing proactive handling of missed notifications and failed processes.

Feature Deep-Dive

Dynamic Payload Memory

One highly requested feature was the ability for publishers to handle dynamic payloads. This means you no longer need to know the payload size in advance, and the publisher can reallocate memory as needed. While this might seem trivial, it’s quite challenging in the context of shared-memory-based communication.

Dynamic payloads work in combination with the slice API: simply define an AllocationStrategy when creating a new publisher, and iceoryx2 automatically handles memory reallocation for you.

Here’s an example:

let service = node
    .service_builder(&"Service With Dynamic Data".try_into()?)
    .publish_subscribe::<[u8]>()
    .open_or_create()?;

let publisher = service
    .publisher_builder()
    // We guess that the samples are at most 16 bytes in size.
    // This is just a hint to the underlying allocator and is purely optional.
    // A better guess minimizes reallocations.
    .initial_max_slice_len(16)
    // The underlying sample size will increase using a power-of-two strategy
    // when [`Publisher::loan_slice()`] or [`Publisher::loan_slice_uninit()`]
    // require more memory than currently available.
    .allocation_strategy(AllocationStrategy::PowerOfTwo)
    .create()?;
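The PowerOfTwo strategy rounds each requested size up to the next power of two, so repeated growth causes only logarithmically many reallocations. Here is a plain-Rust sketch of that growth rule — an illustration of the idea, not the library's internal allocator code:

```rust
/// Illustration of a power-of-two growth rule: the new capacity is
/// the smallest power of two that can hold the requested length.
/// (Sketch only; not iceoryx2's internal implementation.)
fn power_of_two_capacity(requested_len: usize) -> usize {
    requested_len.next_power_of_two()
}

fn main() {
    // Starting from the 16-byte hint above, a 17-byte sample triggers
    // one reallocation to 32 bytes, and a 100-byte sample one to 128.
    assert_eq!(power_of_two_capacity(16), 16);
    assert_eq!(power_of_two_capacity(17), 32);
    assert_eq!(power_of_two_capacity(100), 128);
}
```

With this rule, a publisher that slowly grows its samples from 16 bytes to a megabyte reallocates only a handful of times instead of once per size increase.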

Check out the publish-subscribe with dynamic data example for more details.

Health Monitoring

What happens when a critical process crashes, or a service disappears? Wouldn't it be great to have a mechanism that informs you immediately, instead of leaving you waiting indefinitely?

Health monitoring provides such a mechanism, ensuring you're kept informed about key system events like the appearance or disappearance of services, or the crash of critical processes. In iceoryx2, this is managed in a decentralized manner using Notifier ports.

The Notifier is now configurable to emit a "dead notification" when a process is identified as being down. Additionally, the new WaitSet has been introduced as a key component for efficiently managing notifications and events.

  • You can iterate over all nodes, and if a node is in a dead state (e.g., its owning process crashed), you can explicitly clean up stale nodes and inform other processes.
  • iceoryx2 also performs automatic dead-node checks during critical operations that might impact your system.
  • You can configure this behavior via global.node.cleanup-dead-nodes-on-creation and global.node.cleanup-dead-nodes-on-destruction in the iceoryx2 configuration.
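In an iceoryx2 configuration file (TOML), the two keys from the list above sit in the global.node section. A sketch, with illustrative values:

```toml
[global.node]
cleanup-dead-nodes-on-creation = true
cleanup-dead-nodes-on-destruction = true
```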

Here’s an example implementation:

Node::<ipc::Service>::list(Config::global_config(), |node_state| {
    if let NodeState::Dead(state) = node_state {
        println!(
            "Detected dead node: {:?}",
            state.details().as_ref().map(|v| v.name())
        );
        state.remove_stale_resources().expect("failed to remove stale resources");
    }

    CallbackProgression::Continue
})?;

You can also configure a Notifier to emit a signal as soon as it is identified as dead. This occurs when another process calls DeadNode::remove_stale_resources(). In such cases, the corresponding Listener will be woken up with a predefined EventId.

Here’s an example implementation:

let service_event = node
    .service_builder(&"MyEventName".try_into()?)
    .event()
    .notifier_created_event(EventId::new(1))
    .notifier_dropped_event(EventId::new(2))
    .notifier_dead_event(EventId::new(3))
    .open_or_create()?;

let listener = service_event.listener_builder().create()?;

if let Ok(Some(event_id)) = listener.timed_wait_one(CYCLE_TIME) {
    match event_id.as_value() {
        1 => println!("A new notifier was created."),
        2 => println!("A notifier was dropped."),
        3 => println!("A notifier was identified as dead."),
        _ => println!("An unknown event occurred: {:?}", event_id),
    }
}

Check out the health monitoring example for more details.

Details of the Key Concepts

  1. Notifier Events:

    • notifier_created_event: Triggered when a new Notifier port is created.
    • notifier_dropped_event: Triggered when a Notifier port is dropped.
    • notifier_dead_event: Triggered when a Notifier owned by a dead node is detected, and all its stale resources are successfully removed.
  2. Listener: The Listener listens for events emitted by the Notifier. When an event occurs, the listener wakes up and identifies the event based on the associated EventId.

  3. Dead Node Handling: The notifier_dead_event ensures the system can respond effectively to crashes by notifying the corresponding listeners.

WaitSet

The WaitSet is iceoryx2’s new event multiplexer, allowing you to wait on multiple events — such as iceoryx2 listeners or sockets — in a single call. Whether you’re waiting for incoming messages from a third-party network stack or iceoryx2 notifications, the WaitSet has you covered.

It also includes advanced features like deadlines and intervals:

  • Deadlines: Set timing constraints for incoming messages. If a message isn’t received within the defined timeframe, the WaitSet wakes you up and informs you which deadline was hit.
  • Intervals: Configure the WaitSet to wake you up at regular intervals.
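The difference between the two can be sketched in plain Rust — an illustration of the timing semantics only, not of the WaitSet internals: an interval fires unconditionally every period, while a deadline fires only when no event arrived within its timeframe.

```rust
use std::time::Duration;

// Sketch of the timing semantics (not WaitSet internals):
// an interval wakes the caller every period, unconditionally.
fn interval_elapsed(since_last_wakeup: Duration, period: Duration) -> bool {
    since_last_wakeup >= period
}

// A deadline only triggers when no event arrived within the timeframe.
fn deadline_missed(since_last_event: Duration, deadline: Duration) -> bool {
    since_last_event > deadline
}

fn main() {
    // A 2-second interval fires after 2 seconds, no matter what.
    assert!(interval_elapsed(Duration::from_secs(2), Duration::from_secs(2)));
    // A 1-second deadline is fine while events keep arriving in time...
    assert!(!deadline_missed(Duration::from_millis(900), Duration::from_secs(1)));
    // ...and is reported as missed once they stop.
    assert!(deadline_missed(Duration::from_millis(1100), Duration::from_secs(1)));
}
```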

And another example:

let waitset = WaitSetBuilder::new().create::<ipc::Service>()?;

// Attach intervals
let interval_1_guard = waitset.attach_interval(Duration::from_secs(2))?;
let interval_2_guard = waitset.attach_interval(Duration::from_secs(3))?;

// Attach a deadline and a notification
let deadline_guard = waitset.attach_deadline(&my_listener, Duration::from_secs(1))?;
let notification_guard = waitset.attach_notification(&another_listener)?;

// Define the callback to handle events
let on_event = |attachment_id: WaitSetAttachmentId<ipc::Service>| {
    if attachment_id.has_event_from(&interval_1_guard) {
        // Action for events every 2 seconds
    } else if attachment_id.has_event_from(&interval_2_guard) {
        // Action for events every 3 seconds
    } else if attachment_id.has_event_from(&deadline_guard) {
        // Received data within the 1-second deadline
    } else if attachment_id.has_missed_deadline(&deadline_guard) {
        // No data received within the 1-second deadline
    }
    CallbackProgression::Continue
};

// Wait and process events
waitset.wait_and_process(on_event)?;

You can also attach anything that implements SynchronousMultiplexing. For example, if you have a socket or file descriptor that works with C calls like epoll or select, you simply need to implement the trait, and it can then be used with WaitSet::attach_deadline() or WaitSet::attach_notification().

Check out the event multiplexing and event-based communication examples for more details.

rmw_iceoryx2

We have released the first version of the iceoryx2 RMW ROS 2 binding, allowing iceoryx2 to be used as a transport layer for ROS 2. This provides a significant reduction in latency while lowering the CPU and memory load on your system.

Another advantage is that it offers a pathway to transition from a ROS 2 system to a safety-certified system. The iceoryx2 layer will soon be certifiable and capable of communicating with all ROS 2 nodes. This allows you to port applications to the iceoryx2 API one by one, as needed for certification, while maintaining communication with the rest of the system. This approach allows a smooth and incremental transition from proof-of-concept to production-ready systems.

What’s Next?

Check out our Roadmap to stay updated on our plans and progress.

For the next release, we plan to focus on:

  • Request/Response
  • Python Language Binding
  • Gateways: We will start with zenoh; other protocols like MQTT and DDS will follow.
  • OS Support: QNX (maybe Android & iOS support in sandbox mode)
  • Expanded Documentation: Recognizing the complexities of inter-process communication, we are enhancing our documentation to:
    • Provide a gentle introduction for new users,
    • Explain iceoryx2’s features in detail,
    • Offer step-by-step tutorials to help you make the most of the library.