
Announcing iceoryx2 v0.5.0

Christian Eltzschig - 23/12/2024


What Is iceoryx2

iceoryx2 is a service-based inter-process communication (IPC) library designed to build robust and efficient decentralized systems. It enables ultra low-latency communication between processes — similar to Unix domain sockets or message queues, but significantly faster and easier to use.

Check out the iceoryx2 benchmarks and try them out on your platform!


It also comes with advanced features such as circular buffers, history, event notifications, publish-subscribe messaging, and a decentralized architecture with no need for a broker.

Release v0.5.0

The iceoryx2 v0.5.0 release introduces several features that improve the user experience. Dynamic payloads allow publishers to handle variable payload sizes without predefining a maximum payload size. Health monitoring informs users when a process crashes in a decentralized system. The new WaitSet enables efficient event management by waiting on multiple events, such as listeners or sockets, in a single call, and comes with advanced features like deadlines and intervals to manage timing constraints. Together, these make a decentralized architecture more robust by allowing proactive handling of missed notifications and failed processes.

Feature Deep-Dive

Dynamic Payload Memory

One highly requested feature was the ability for publishers to handle dynamic payloads. This means you no longer need to know the payload size in advance, and the publisher can reallocate memory as needed. While this might seem trivial, it’s quite challenging in the context of shared-memory-based communication.

The dynamic payload works in combination with the slice API by simply defining an AllocationStrategy when creating a new publisher, and iceoryx2 automatically handles memory reallocation for you.

Here’s an example:

let service = node
    .service_builder(&"Service With Dynamic Data".try_into()?)
    .publish_subscribe::<[u8]>()
    .open_or_create()?;

let publisher = service
    .publisher_builder()
    // We guess that the samples are at most 16 bytes in size.
    // This is just a hint to the underlying allocator and is purely optional.
    // A better guess minimizes reallocations.
    .initial_max_slice_len(16)
    // The underlying sample size will increase using a power-of-two strategy
    // when [`Publisher::loan_slice()`] or [`Publisher::loan_slice_uninit()`]
    // require more memory than currently available.
    .allocation_strategy(AllocationStrategy::PowerOfTwo)
    .create()?;
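To illustrate what `AllocationStrategy::PowerOfTwo` implies, here is a std-only sketch of a power-of-two growth rule: keep the current capacity if the loan fits, otherwise round the required size up to the next power of two. This is purely illustrative and not iceoryx2's actual allocator implementation.

```rust
/// Illustrative growth rule for a power-of-two allocation strategy:
/// keep the current capacity if it suffices, otherwise round the
/// required size up to the next power of two.
fn grown_capacity(current: usize, required: usize) -> usize {
    if required <= current {
        current
    } else {
        required.next_power_of_two()
    }
}

fn main() {
    // Starting from the 16-byte hint above:
    assert_eq!(grown_capacity(16, 10), 16); // fits, no reallocation
    assert_eq!(grown_capacity(16, 17), 32); // grows to the next power of two
    assert_eq!(grown_capacity(32, 100), 128);
}
```

A good `initial_max_slice_len` hint reduces how often this growth path is taken, which is why a better guess minimizes reallocations.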

Check out the publish-subscribe with dynamic data example for more details.

Health Monitoring

What happens when a critical process crashes, or a service disappears? Wouldn't it be great to have a mechanism that informs you immediately, instead of leaving you waiting indefinitely?

Health monitoring provides such a mechanism, ensuring you're kept informed about key system events like the appearance or disappearance of services, or the crash of critical processes. In iceoryx2, this is managed in a decentralized manner using Notifier ports.

The Notifier is now configurable to emit a "dead notification" when a process is identified as being down. Additionally, the new WaitSet has been introduced as a key component for efficiently managing notifications and events.

  • You can iterate over all nodes, and if a node is in a dead state (e.g., its owning process crashed), you can explicitly clean up stale nodes and inform other processes.
  • iceoryx2 also performs automatic dead-node checks during critical operations that might impact your system.
  • You can configure this behavior via global.node.cleanup-dead-nodes-on-creation and global.node.cleanup-dead-nodes-on-destruction in the iceoryx2 configuration.
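In the TOML configuration file, those two switches might look like this (the values shown are illustrative; check your configuration for the actual defaults):

```toml
[global.node]
cleanup-dead-nodes-on-creation = true
cleanup-dead-nodes-on-destruction = true
```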

Here’s an example implementation:

Node::<ipc::Service>::list(Config::global_config(), |node_state| {
    if let NodeState::Dead(state) = node_state {
        println!(
            "Detected dead node: {:?}",
            state.details().as_ref().map(|v| v.name())
        );
        state.remove_stale_resources().expect("failed to remove stale resources");
    }

    CallbackProgression::Continue
})?;

You can also configure a Notifier to emit a signal as soon as it is identified as dead. This occurs when another process calls [DeadNode::remove_stale_resources()]. In such cases, the corresponding Listener will be woken up with a predefined EventId.

Here’s an example implementation:

let service_event = node
    .service_builder(&"MyEventName".try_into()?)
    .event()
    .notifier_created_event(EventId::new(1))
    .notifier_dropped_event(EventId::new(2))
    .notifier_dead_event(EventId::new(3))
    .open_or_create()?;

let listener = service_event.listener_builder().create()?;

if let Ok(Some(event_id)) = listener.timed_wait_one(CYCLE_TIME) {
    if event_id == EventId::new(1) {
        println!("A new notifier was created.");
    } else if event_id == EventId::new(2) {
        println!("A notifier was dropped.");
    } else if event_id == EventId::new(3) {
        println!("A notifier was identified as dead.");
    } else {
        println!("An unknown event occurred: {:?}", event_id);
    }
}

Check out the health monitoring example for more details.

Details of the Key Concepts

  1. Notifier Events:

    • notifier_created_event: Triggered when a new Notifier port is created.
    • notifier_dropped_event: Triggered when a Notifier port is dropped.
    • notifier_dead_event: Triggered when a Notifier owned by a dead node is detected, and all its stale resources are successfully removed.
  2. Listener: The Listener listens for events emitted by the Notifier. When an event occurs, the listener wakes up and identifies the event based on the associated EventId.

  3. Dead Node Handling: The notifier_dead_event ensures the system can respond effectively to crashes by notifying the corresponding listeners.

WaitSet

The WaitSet is iceoryx2’s new event multiplexer, allowing you to wait on multiple events — such as iceoryx2 listeners or sockets — in a single call. Whether you’re waiting for incoming messages from a third-party network stack or iceoryx2 notifications, the WaitSet has you covered.

It also includes advanced features like deadlines and intervals:

  • Deadlines: Set timing constraints for incoming messages. If a message isn’t received within the defined timeframe, the WaitSet wakes you up and informs you which deadline was hit.
  • Intervals: Configure the WaitSet to wake you up at regular intervals.

And another example:

let waitset = WaitSetBuilder::new().create::<ipc::Service>()?;

// Attach intervals
let interval_1_guard = waitset.attach_interval(Duration::from_secs(2))?;
let interval_2_guard = waitset.attach_interval(Duration::from_secs(3))?;

// Attach a deadline and a notification
let deadline_guard = waitset.attach_deadline(my_listener, Duration::from_secs(1))?;
let notification_guard = waitset.attach_notification(another_listener)?;

// Define the callback to handle events
let on_event = |attachment_id: WaitSetAttachmentId<ipc::Service>| {
    if attachment_id.has_event_from(&interval_1_guard) {
        // Action for events every 2 seconds
    } else if attachment_id.has_event_from(&interval_2_guard) {
        // Action for events every 3 seconds
    } else if attachment_id.has_event_from(&deadline_guard) {
        // Received data within the 1-second deadline
    } else if attachment_id.has_missed_deadline(&deadline_guard) {
        // No data received within the 1-second deadline
    }
    }
    CallbackProgression::Continue
};

// Wait and process events
waitset.wait_and_process(on_event)?;

You can also attach anything that implements SynchronousMultiplexing. For example, if you have a socket or file descriptor that works with system calls like epoll or select, you simply implement the trait, and it can then be used with WaitSet::attach_deadline() or WaitSet::attach_notification().

Check out the event multiplexing example and the event-based communication example for more details.

rmw_iceoryx2

We have released the first version of the iceoryx2 RMW ROS 2 binding, allowing iceoryx2 to be used as a transport layer for ROS 2. This provides a significant reduction in latency while lowering the CPU and memory load on your system.

Another advantage is that it offers a pathway to transition from a ROS 2 system to a safety-certified system. The iceoryx2 layer will soon be certifiable and capable of communicating with all ROS 2 nodes. This allows you to port applications to the iceoryx2 API one by one, as needed for certification, while maintaining communication with the rest of the system. This approach allows a smooth and incremental transition from proof-of-concept to production-ready systems.

What’s Next?

Check out our Roadmap to stay updated on our plans and progress.

For the next release, we plan to focus on:

  • Request/Response
  • Python Language Binding
  • Gateways: We will start with zenoh; other protocols like MQTT or DDS will follow.
  • OS Support: QNX (maybe Android & iOS support in sandbox mode)
  • Expanded Documentation: Recognizing the complexities of inter-process communication, we are enhancing our documentation to:
    • Provide a gentle introduction for new users,
    • Explain iceoryx2’s features in detail,
    • Offer step-by-step tutorials to help you make the most of the library.
...

Announcing iceoryx2 v0.4.0

Christian Eltzschig - 28/09/2024


What Is iceoryx2

iceoryx2 is a service-based inter-process communication (IPC) library designed to make communication between processes as fast as possible - like Unix domain sockets or message queues, but orders of magnitude faster and easier to use. It also comes with advanced features such as circular buffers, history, event notifications, publish-subscribe messaging, and a decentralized architecture with no need for a broker.

Release v0.4.0

With today's iceoryx2 v0.4.0 release, we've achieved many of our milestones and are now close to feature parity with its predecessor, the trusty old iceoryx.

If you're wondering why you should choose iceoryx2 over iceoryx, here are some of its next-gen features:

  • No more need for a central daemon like RouDi.
  • It's up to 10 times faster thanks to a new, more efficient architecture.
  • More dynamic than ever—no more compile-time memory pool configuration.
  • Advanced Quality of Service (QoS) settings.
  • Extremely modular: every aspect of iceoryx2 can be customized, allowing future support for GPUs, FPGAs, and more.
  • Completely decentralized and even more robust.
  • A restructured API and resource management system that enables a zero-trust policy for true zero-copy communication in the future.
  • Language bindings for C and C++ with CMake and Bazel support right out of the box. Python and other languages are coming soon.
  • Upcoming gateways to enable network communication via protocols like zenoh, DDS, MQTT, and more.

With this new release, we're faster than ever. On some platforms, latency is even under 100ns! Be sure to check out our iceoryx2 benchmarks and try them out on your platform.


Highlights

Here are some of the feature highlights in v0.4.0:

  • C and C++ language bindings: We've added a range of new examples to help you get started with the supported languages.

    • Plus, there's a shiny new website: https://iceoryx2.readthedocs.io, where we're building a detailed introduction to inter-process communication, true zero-copy, and iceoryx2. Whether you're just getting started or looking to fine-tune every feature to your needs, it's all in one place.

  • New build systems: C and C++ bindings come with support for:

    • Bazel & CMake

    • colcon: We're working on iceoryx2_rmw, which will be unveiled at ROSCon 2024 during Mathias' talk: "iceoryx2: A Journey to Becoming a First-Class RMW Alternative."


  • iceoryx2 nodes: Nodes are the central entity handling all process-local resources, such as ports, and are key to monitoring other processes and nodes. If a process crashes, the others will clean up resources as soon as the issue is detected.

  • Command-line debugging and introspection: Meet iox2. If you want to see which services or nodes are running—or if you're curious about the details of a service or process—this is your go-to tool.

  • Runtime-sized services: We've overcome the compile-time memory configuration limitation of iceoryx1. If you want to send a dynamic-sized typed array (like a Rust slice), you can set up the service and publisher with a runtime worst-case size. If that's insufficient, you can create a new publisher with a larger array size.

  • Advanced service and port configurations: For specialized use cases, like SIMD or FPGA, you can define custom alignments for your service's payload.

  • User-defined service attributes: You can now set custom key-value pairs to tag services with additional properties. Check out the iceoryx2 Deep Dive - Service Attributes for more details.

  • iceoryx2 Domains: Separate multiple processes into domains, ensuring they don't interfere with one another.

  • Custom User Header: There's an interface for defining a custom header that is sent with every sample.

  • 32-bit support: iceoryx2 now runs on 32-bit machines, and long-term, we aim to support mixed-mode zero-copy communication between 32-bit and 64-bit processes.

  • Placement new for iceoryx2-bb-containers: Since iceoryx2 can handle gigabytes of data, we provide a mechanism to loan memory and perform in-place initialization—something akin to C++'s placement new.
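As an illustration of the custom user header feature mentioned above: a header type that lives in shared memory must be self-contained, i.e., fixed-size and free of pointers or heap allocations. The struct name and fields below are made up for this sketch; they are not part of the iceoryx2 API.

```rust
use std::mem;

// Hypothetical user header. #[repr(C)] gives it a stable, predictable layout
// so it can be shared across processes, and it contains only plain
// fixed-size fields (no pointers, no heap allocations).
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct CustomHeader {
    version: u32,
    timestamp_ns: u64,
}

fn main() {
    // With #[repr(C)] on a 64-bit target, the u64 field forces 8-byte
    // alignment, padding the struct to 16 bytes.
    assert_eq!(mem::size_of::<CustomHeader>(), 16);
    assert_eq!(mem::align_of::<CustomHeader>(), 8);
}
```

Verifying size and alignment like this is a cheap sanity check that a header type has no hidden layout surprises before it crosses a process boundary.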

Sneak Peek: Mission Control

Our upcoming Mission Control Center will provide deep introspection and debugging for your iceoryx2 system. You’ll be able to monitor the CPU, memory, and I/O load of every process. You can also view the frequency and content of message samples, inspect individual nodes with their running services, and visualize how nodes and services are connected—all in real time.


Stay tuned for its release at the end of this year!

What’s Next?

Check out our Roadmap.

In the next release, we plan to focus on:

  • Finalizing the C/C++ language bindings: Most of the Rust functionality works, but features like dynamic slice support and service attributes are still in progress.
  • Event multiplexing: We’re extending Node::wait() for more streamlined event handling.
    • This will come with advanced integrated events, such as push notifications for system events like process crashes or service changes.
    • Expect detailed examples and documentation.
  • Services with dynamic payloads: You won’t need to define a fixed payload size for services with slices anymore. We’ll introduce an allocation algorithm that acquires more shared memory as needed, and it’ll be customizable.
  • Health monitoring: With iceoryx2 nodes, we can detect dead nodes and clean up their resources. The next step is to actively notify processes when a sender or receiver dies.
  • Expanded documentation: Inter-process communication can be complex, so we’re working on extending the docs to provide a gentle introduction, explain iceoryx2's features in detail, and offer a step-by-step tutorial on making the most of it.
...