
Announcing iceoryx2 v0.5.0

Christian Eltzschig - 23/12/2024


What Is iceoryx2

iceoryx2 is a service-based inter-process communication (IPC) library designed to build robust and efficient decentralized systems. It enables ultra low-latency communication between processes — similar to Unix domain sockets or message queues, but significantly faster and easier to use.

Check out the iceoryx2 benchmarks and try them out on your platform!


It also comes with advanced features such as circular buffers, history, event notifications, publish-subscribe messaging, and a decentralized architecture with no need for a broker.

Release v0.5.0

The iceoryx2 v0.5.0 release introduces several features that improve the user experience. Dynamic payloads allow publishers to handle variable payload sizes without the need to predefine a maximum payload size. Health monitoring informs users when a process crashes in a decentralized system. The new WaitSet enables efficient event management by waiting on multiple events, such as listeners or sockets, in a single call, and comes with deadlines and intervals to manage timing constraints. Together, these features make a decentralized architecture more robust by allowing proactive handling of missed notifications and failed processes.

Feature Deep-Dive

Dynamic Payload Memory

One highly requested feature was the ability for publishers to handle dynamic payloads. This means you no longer need to know the payload size in advance, and the publisher can reallocate memory as needed. While this might seem trivial, it’s quite challenging in the context of shared-memory-based communication.

The dynamic payload works in combination with the slice API by simply defining an AllocationStrategy when creating a new publisher, and iceoryx2 automatically handles memory reallocation for you.

Here’s an example:

let service = node
    .service_builder(&"Service With Dynamic Data".try_into()?)
    .publish_subscribe::<[u8]>()
    .open_or_create()?;

let publisher = service
    .publisher_builder()
    // We guess that the samples are at most 16 bytes in size.
    // This is just a hint to the underlying allocator and is purely optional.
    // A better guess minimizes reallocations.
    .initial_max_slice_len(16)
    // The underlying sample size will increase using a power-of-two strategy
    // when [`Publisher::loan_slice()`] or [`Publisher::loan_slice_uninit()`]
    // require more memory than currently available.
    .allocation_strategy(AllocationStrategy::PowerOfTwo)
    .create()?;

Check out the publish-subscribe with dynamic data example for more details.
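As a mental model for AllocationStrategy::PowerOfTwo (a sketch of the growth rule, not the library's internal code), each reallocation rounds the requested slice length up to the next power of two, so repeated growth triggers only logarithmically many reallocations:

```rust
// Sketch of the growth rule behind a power-of-two allocation strategy:
// round the requested slice length up to the next power of two.
fn next_capacity(requested: usize) -> usize {
    requested.next_power_of_two()
}

fn main() {
    // Starting from the 16-byte hint, a 100-byte loan would grow the
    // underlying sample size to 128 bytes, and a 300-byte loan to 512.
    assert_eq!(next_capacity(100), 128);
    assert_eq!(next_capacity(300), 512);
    println!("power-of-two growth verified");
}
```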

Health Monitoring

What happens when a critical process crashes, or a service disappears? Wouldn't it be great to have a mechanism that informs you immediately, instead of leaving you waiting indefinitely?

Health monitoring provides such a mechanism, ensuring you're kept informed about key system events like the appearance or disappearance of services, or the crash of critical processes. In iceoryx2, this is managed in a decentralized manner using Notifier ports.

The Notifier is now configurable to emit a "dead notification" when a process is identified as being down. Additionally, the new WaitSet has been introduced as a key component for efficiently managing notifications and events.

  • You can iterate over all nodes, and if a node is in a dead state (e.g., its owning process crashed), you can explicitly clean up stale nodes and inform other processes.
  • iceoryx2 also performs automatic dead-node checks during critical operations that might impact your system.
  • You can configure this behavior via global.node.cleanup-dead-nodes-on-creation and global.node.cleanup-dead-nodes-on-destruction in the iceoryx2 configuration.
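Assuming the TOML layout of the iceoryx2 default configuration file (the exact file location is platform-dependent), the two options from the bullet above would sit under the [global.node] section:

```toml
[global.node]
cleanup-dead-nodes-on-creation = true
cleanup-dead-nodes-on-destruction = true
```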

Here’s an example implementation:

Node::<ipc::Service>::list(Config::global_config(), |node_state| {
    if let NodeState::Dead(state) = node_state {
        println!(
            "Detected dead node: {:?}",
            state.details().as_ref().map(|v| v.name())
        );
        state.remove_stale_resources()
            .expect("failed to remove stale resources");
    }

    CallbackProgression::Continue
})?;

You can also configure a Notifier to emit a signal as soon as it is identified as dead. This occurs when another process calls [DeadNode::remove_stale_resources()]. In such cases, the corresponding Listener will be woken up with a predefined EventId.

Here’s an example implementation:

let service_event = node
    .service_builder(&"MyEventName".try_into()?)
    .event()
    .notifier_created_event(EventId::new(1))
    .notifier_dropped_event(EventId::new(2))
    .notifier_dead_event(EventId::new(3))
    .open_or_create()?;

let listener = service_event.listener_builder().create()?;

if let Ok(Some(event_id)) = listener.timed_wait_one(CYCLE_TIME) {
    match event_id.as_value() {
        1 => println!("A new notifier was created."),
        2 => println!("A notifier was dropped."),
        3 => println!("A notifier was identified as dead."),
        _ => println!("An unknown event occurred: {:?}", event_id),
    }
}

Check out the health monitoring example for more details.

Details of the Key Concepts

  1. Notifier Events:

    • notifier_created_event: Triggered when a new Notifier port is created.
    • notifier_dropped_event: Triggered when a Notifier port is dropped.
    • notifier_dead_event: Triggered when a Notifier owned by a dead node is detected, and all its stale resources are successfully removed.
  2. Listener: The Listener listens for events emitted by the Notifier. When an event occurs, the listener wakes up and identifies the event based on the associated EventId.

  3. Dead Node Handling: The notifier_dead_event ensures the system can respond effectively to crashes by notifying the corresponding listeners.

WaitSet

The WaitSet is iceoryx2’s new event multiplexer, allowing you to wait on multiple events — such as iceoryx2 listeners or sockets — in a single call. Whether you’re waiting for incoming messages from a third-party network stack or iceoryx2 notifications, the WaitSet has you covered.

It also includes advanced features like deadlines and intervals:

  • Deadlines: Set timing constraints for incoming messages. If a message isn’t received within the defined timeframe, the WaitSet wakes you up and informs you which deadline was hit.
  • Intervals: Configure the WaitSet to wake you up at regular intervals.

And another example:

let waitset = WaitSetBuilder::new().create::<ipc::Service>()?;

// Attach intervals
let interval_1_guard = waitset.attach_interval(Duration::from_secs(2))?;
let interval_2_guard = waitset.attach_interval(Duration::from_secs(3))?;

// Attach a deadline and a notification
let deadline_guard = waitset.attach_deadline(&my_listener, Duration::from_secs(1))?;
let notification_guard = waitset.attach_notification(&another_listener)?;

// Define the callback to handle events
let on_event = |attachment_id: WaitSetAttachmentId<ipc::Service>| {
    if attachment_id.has_event_from(&interval_1_guard) {
        // Action for events every 2 seconds
    } else if attachment_id.has_event_from(&interval_2_guard) {
        // Action for events every 3 seconds
    } else if attachment_id.has_event_from(&deadline_guard) {
        // Received data within the 1-second deadline
    } else if attachment_id.has_missed_deadline(&deadline_guard) {
        // No data received within the 1-second deadline
    }
    CallbackProgression::Continue
};

// Wait and process events
waitset.wait_and_process(on_event)?;

You can also attach anything that implements SynchronousMultiplexing. For example, if you have a socket or file descriptor that works with C calls like epoll or select, you simply need to implement the trait, and it can then be used with WaitSet::attach_deadline() or WaitSet::attach_notification().

Check out the event multiplexing example and the event-based communication example for more details.

rmw_iceoryx2

We have released the first version of the iceoryx2 RMW ROS 2 binding, allowing iceoryx2 to be used as a transport layer for ROS 2. This provides a significant reduction in latency while lowering the CPU and memory load on your system.

Another advantage is that it offers a pathway to transition from a ROS 2 system to a safety-certified system. The iceoryx2 layer will soon be certifiable and capable of communicating with all ROS 2 nodes. You can therefore port applications to the iceoryx2 API one by one, as needed for certification, while maintaining communication with the rest of the system, enabling a smooth and incremental transition from proof-of-concept to production-ready systems.

What’s Next?

Check out our Roadmap to stay updated on our plans and progress.

For the next release, we plan to focus on:

  • Request/Response
  • Python Language Binding
  • Gateways: We'll start with zenoh; other protocols like MQTT and DDS will follow.
  • OS Support: QNX (maybe Android & iOS support in sandbox mode)
  • Expanded Documentation: Recognizing the complexities of inter-process communication, we are enhancing our documentation to:
    • Provide a gentle introduction for new users,
    • Explain iceoryx2’s features in detail,
    • Offer step-by-step tutorials to help you make the most of the library.
...

Announcing iceoryx2 v0.4.0

Christian Eltzschig - 28/09/2024


What Is iceoryx2

iceoryx2 is a service-based inter-process communication (IPC) library designed to make communication between processes as fast as possible - like Unix domain sockets or message queues, but orders of magnitude faster and easier to use. It also comes with advanced features such as circular buffers, history, event notifications, publish-subscribe messaging, and a decentralized architecture with no need for a broker.

Release v0.4.0

With today's iceoryx2 v0.4.0 release, we've achieved many of our milestones and are now close to feature parity with its predecessor, the trusty old iceoryx.

If you're wondering why you should choose iceoryx2 over iceoryx, here are some of its next-gen features:

  • No more need for a central daemon like RouDi.
  • It's up to 10 times faster thanks to a new, more efficient architecture.
  • More dynamic than ever—no more compile-time memory pool configuration.
  • Advanced Quality of Service (QoS) settings.
  • Extremely modular: every aspect of iceoryx2 can be customized, allowing future support for GPUs, FPGAs, and more.
  • Completely decentralized and even more robust.
  • A restructured API and resource management system that enables a zero-trust policy for true zero-copy communication in the future.
  • Language bindings for C and C++ with CMake and Bazel support right out of the box. Python and other languages are coming soon.
  • Upcoming gateways to enable network communication via protocols like zenoh, DDS, MQTT, and more.

With this new release, we're faster than ever. On some platforms, latency is even under 100ns! Be sure to check out our iceoryx2 benchmarks and try them out on your platform.


Highlights

Here are some of the feature highlights in v0.4.0:

  • C and C++ language bindings: We've added a range of new examples to help you get started with the supported languages.

    • Plus, there's a shiny new website: https://iceoryx2.readthedocs.io, where we're building a detailed introduction to inter-process communication, true zero-copy, and iceoryx2. Whether you're just getting started or looking to fine-tune every feature to your needs, it's all in one place.

  • New build systems: C and C++ bindings come with support for:

    • Bazel & CMake

    • colcon: We're working on rmw_iceoryx2, which will be unveiled at ROSCon 2024 during Mathias' talk: "iceoryx2: A Journey to Becoming a First-Class RMW Alternative."


  • iceoryx2 nodes: Nodes are the central entity handling all process-local resources, such as ports, and are key to monitoring other processes and nodes. If a process crashes, the others will clean up resources as soon as the issue is detected.

  • Command-line debugging and introspection: Meet iox2. If you want to see which services or nodes are running—or if you're curious about the details of a service or process—this is your go-to tool.

  • Runtime-sized services: We've overcome the compile-time memory configuration limitation of iceoryx1. If you want to send a dynamic-sized typed array (like a Rust slice), you can set up the service and publisher with a runtime worst-case size. If that's insufficient, you can create a new publisher with a larger array size.

  • Advanced service and port configurations: For specialized use cases, like SIMD or FPGA, you can define custom alignments for your service's payload.

  • User-defined service attributes: You can now set custom key-value pairs to tag services with additional properties. Check out the iceoryx2 Deep Dive - Service Attributes for more details.

  • iceoryx2 Domains: Separate multiple processes into domains, ensuring they don't interfere with one another.

  • Custom User Header: There's an interface for defining a custom header that is sent with every sample.

  • 32-bit support: iceoryx2 now runs on 32-bit machines, and long-term, we aim to support mixed-mode zero-copy communication between 32-bit and 64-bit processes.

  • Placement new for iceoryx2-bb-containers: Since iceoryx2 can handle gigabytes of data, we provide a mechanism to loan memory and perform in-place initialization—something akin to C++'s placement new.
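The in-place initialization mechanism from the last bullet is comparable to this plain-Rust sketch using MaybeUninit (illustrative only, not the iceoryx2_bb_container API):

```rust
use std::mem::MaybeUninit;

// A large payload we'd rather not construct on the stack and then copy.
struct Payload {
    data: [u8; 64],
}

// Initialize a loaned, uninitialized slot in place and hand back a reference,
// similar in spirit to C++'s placement new.
fn emplace(slot: &mut MaybeUninit<Payload>, fill: u8) -> &mut Payload {
    slot.write(Payload { data: [fill; 64] })
}

fn main() {
    let mut slot = MaybeUninit::<Payload>::uninit(); // the "loaned" memory
    let payload = emplace(&mut slot, 42);
    assert!(payload.data.iter().all(|&b| b == 42));
    println!("in-place initialization verified");
}
```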

Sneak Peek: Mission Control

Our upcoming Mission Control Center will provide deep introspection and debugging for your iceoryx2 system. You’ll be able to monitor the CPU, memory, and I/O load of every process. You can also view the frequency and content of message samples, inspect individual nodes with their running services, and visualize how nodes and services are connected—all in real time.


Stay tuned for its release at the end of this year!

What’s Next?

Check out our Roadmap.

In the next release, we plan to focus on:

  • Finalizing the C/C++ language bindings: Most of the Rust functionality works, but features like dynamic slice support and service attributes are still in progress.
  • Event multiplexing: We’re extending Node::wait() for more streamlined event handling.
    • This will come with advanced integrated events, such as push notifications for system events like process crashes or service changes.
    • Expect detailed examples and documentation.
  • Services with dynamic payloads: You won’t need to define a fixed payload size for services with slices anymore. We’ll introduce an allocation algorithm that acquires more shared memory as needed, and it’ll be customizable.
  • Health monitoring: With iceoryx2 nodes, we can detect dead nodes and clean up their resources. The next step is to actively notify processes when a sender or receiver dies.
  • Expanded documentation: Inter-process communication can be complex, so we’re working on extending the docs to provide a gentle introduction, explain iceoryx2's features in detail, and offer a step-by-step tutorial on making the most of it.
...

ekxide's iceoryx2 Deep Dive - Service Attributes

Christian Eltzschig - 15/06/2024

With this article, we are starting a new series where we dive deep into the current development progress of iceoryx2. We'll explain the newest features, the problems they solve, and the cool new things you can build with iceoryx2. This series will let you see what our open-source company, ekxide IO GmbH, is currently working on. It also allows us to collect feedback from the community, plan and refine new features, and discover interesting projects using iceoryx2.

For those who are not familiar with iceoryx2: it is an open-source library that handles reliable and incredibly fast inter-process communication, suitable for applications ranging from desktops to mission-critical systems like cars or medical devices. So, if you need to send data or signal events from process A to process B, iceoryx2 is your go-to library.

https://github.com/eclipse-iceoryx/iceoryx2

The Problem

What problem does iceoryx2 solve? It is a service-oriented inter-process middleware where you can create services with a name and send data or signals to other processes.

Assume you are building a robot with multiple camera sensors. You may have several processes producing video streams, publishing them on services like "camera:front," "camera:back," "camera:left," and so on. In iceoryx2, you could implement it like this:

let service = zero_copy::Service::new(ServiceName::new("camera:front")?)
    .publish_subscribe::<CameraImage>()
    .create()?;

let publisher = service.publisher().create()?;

loop {
    let sample = publisher.loan_uninit()?;
    sample.write_payload(get_camera_image()).send()?;
}

If you now write a process that requires a video stream to detect obstacles and perform an emergency brake if necessary, you could easily subscribe to such a service like this:

let service = zero_copy::Service::new(ServiceName::new("camera:front")?)
    .publish_subscribe::<CameraImage>()
    .open()?;

let subscriber = service.subscriber().create()?;

loop {
    if let Some(image) = subscriber.receive()? {
        perform_some_processing(*image);
    }
}

But what if you have another service that wants to create high-quality snapshots of the scenes the robot captures? It would be advantageous if the service used a 4k camera. Or, if the robot is moving at high speed, it would be preferable to use only services where the camera produces images at a rate of 60 frames per second.

Where could we store this information for the consumers of the data? We could add it to the header of each message, but repeatedly transmitting information that never changes is inefficient.

The solution is service attributes.

Service Attributes

Service attributes are key-value pairs that remain constant during the service's lifetime. They can be set when the service is created and can be read by any participant and during service discovery.

let service = zero_copy::Service::new(ServiceName::new("camera:front")?)
    .publish_subscribe::<CameraImage>()
    .create_with_attributes(
        &AttributeSpecifier::new()
            .define("camera-resolution", "1920x1080")
            .define("frames-per-second", "60"),
    )?;

When you perform a service discovery, you immediately see what attributes are set and can select the right service that satisfies all your requirements. It also allows you to acquire additional information about the counterpart.

let service = zero_copy::Service::new(ServiceName::new("camera:front")?)
    .publish_subscribe::<CameraImage>()
    .open()?;

for attribute in service.attributes().iter() {
    println!("{} = {}", attribute.key(), attribute.value());
}

Another option is to define the service attributes as requirements. For instance, it could be important that a specific key is defined without considering the value, or that a specific key-value pair is defined. Let's go back to our example and assume that we do not care about the camera resolution as long as it is defined as an attribute, but we need 60 frames per second to perform an emergency brake.

let service = zero_copy::Service::new(ServiceName::new("camera:front")?)
    .publish_subscribe::<CameraImage>()
    .open_with_attributes(
        &AttributeVerifier::new()
            .require_key("camera-resolution")
            .require("frames-per-second", "60"),
    )?;

One of our internal iceoryx2 use cases for service attributes is gateways. When you want to forward a message from iceoryx2 via the MQTT protocol, you may want to use a different service name. Sometimes this is even mandatory, since a protocol such as SOME/IP does not support the iceoryx2 naming scheme. With service attributes, we can now define the translation for the gateway directly in the service and specify that the "camera:front" service should map to the MQTT service "camera/front."

let service = zero_copy::Service::new(ServiceName::new("camera:front")?)
    .publish_subscribe::<CameraImage>()
    .create_with_attributes(
        &AttributeSpecifier::new()
            .define("mqtt-service-name", "camera/front"),
    )?;

What's Next?

One thing still missing is a mechanism to make the attributes more scalable. We need to come up with a configuration file or another solution that allows us to define attributes, such as the iceoryx2-to-MQTT service mapping, in a more centralized manner rather than hardcoding them. Let's see what we can come up with — we'll keep you posted.

Happy Hacking...

Announcing iceoryx2 v0.3.0

Christian Eltzschig - 18/04/2024

Today, I am happy to announce iceoryx2 v0.3.0. The release comes with cool new features, improved documentation and additional examples.

So here we go.

Features & Improvements

Communication Between Docker Containers

With iceoryx2, you can establish zero-copy communication between multiple Docker containers. Since iceoryx2 just uses shared memory and some files stored in /tmp/iceoryx2 for communication, all you have to do is share /tmp/iceoryx2 and /dev/shm with all your Docker containers, and everything works. We created a docker example that explains all the little details.
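As a sketch (the image name is a placeholder), sharing those two locations with a container looks like this:

```shell
# "my-iceoryx2-app" is a placeholder image name; the essential part is
# mounting /tmp/iceoryx2 (iceoryx2's runtime files) and /dev/shm
# (POSIX shared memory) into the container.
docker run -it \
    -v /tmp/iceoryx2:/tmp/iceoryx2 \
    -v /dev/shm:/dev/shm \
    my-iceoryx2-app
```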

Note: All paths and naming schemes can be configured via a config file. For more details and documentation, take a look at the iceoryx2 default configuration.

Services Without Lifetime Parameters

In v0.2, every endpoint and payload sample in iceoryx2 had generic lifetime parameters. The idea was that a service is, from a high-level point of view, a factory for endpoints like publishers or subscribers. Those endpoints were in turn factories for samples. For instance, a subscriber "produces" a sample when the call my_subscriber.receive() returns the received sample. Under the hood, the service created system resources that had to live as long as any endpoints or samples were active. Therefore, the service had to live at least as long as its endpoints, and an endpoint at least as long as its samples.

However, you ran into trouble when you wanted to store samples from different endpoints, with different lifetimes, in a Vec to cache them for later. Thanks to Arc, which allows us to share ownership of those resources, that problem is gone, and the API is now much easier to use.
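The change can be illustrated with plain Rust (the types here are illustrative stand-ins, not the iceoryx2 API): each sample co-owns its backing resources via Arc, so it no longer borrows from a service or endpoint and can be cached freely.

```rust
use std::sync::Arc;

// Illustrative stand-in for the service-owned system resources.
struct SharedMemory {}

struct Sample {
    // Each sample co-owns the resources it points into, so it can
    // outlive the endpoint handle it was received from.
    _resources: Arc<SharedMemory>,
    value: u64,
}

// Stand-in for receiving a sample from an endpoint.
fn receive(resources: &Arc<SharedMemory>, value: u64) -> Sample {
    Sample { _resources: Arc::clone(resources), value }
}

fn main() {
    let resources = Arc::new(SharedMemory {});
    let mut cache: Vec<Sample> = Vec::new();

    // Samples can be cached in a Vec without lifetime parameters
    // tying them to the service.
    cache.push(receive(&resources, 1));
    cache.push(receive(&resources, 2));

    drop(resources); // the cached samples still keep the resources alive
    assert_eq!(cache.iter().map(|s| s.value).sum::<u64>(), 3);
    println!("shared ownership verified");
}
```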

Sending Complex Data

Usually, you want to send more complex data than just arrays of integers via shared memory. This is why iceoryx2-bb-containers becomes public API with this iceoryx2 release. It comes with compile-time fixed-size versions of Queue, Vec, and ByteString that can be used as building blocks for transmission types.

use iceoryx2_bb_container::{
    byte_string::FixedSizeByteString, vec::FixedSizeVec,
};

#[derive(Debug, Default)]
#[repr(C)]
pub struct ComplexDataType {
    text: FixedSizeByteString<8>,
    vec_of_data: FixedSizeVec<u64, 4>,
}

If you would like to see a complete working example, take a look at the complex data types example.

Note: I know defining the capacity at compile-time is not yet perfect. However, we are working on runtime dynamic data types based on relocatable containers that will be available with an upcoming release.

Once relocatable containers land, you will be able to define your transmission types without any compile-time restrictions:

use iceoryx2_bb_container::vec::RelocatableVec;

#[derive(Debug, Default)]
#[repr(C)]
pub struct ComplexDataType {
    some_data: RelocatableVec<u64>,
    other_data: RelocatableVec<f32>,
}

Improved Event Communication

The event messaging pattern is iceoryx2's basic building block for async operations and push notifications. For this release, we ported the C++ iceoryx1 bitset, which solves the problem of a limited queue buffer on the listener side.

When a Notifier sends notifications with its EventIds in a busy loop, a queue-based buffer fills up quickly, and other Notifiers can no longer deliver their notifications to the Listener. A bitset, where each Notifier flips the bit corresponding to its EventId, solves this issue.
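The principle can be sketched with a plain-Rust atomic bitset (illustrative only, not the ported implementation): a notifier atomically sets the bit for its EventId, so duplicate notifications collapse into a single bit instead of filling a queue.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative bitset for up to 64 distinct EventIds.
struct EventBitset {
    bits: AtomicU64,
}

impl EventBitset {
    fn new() -> Self {
        Self { bits: AtomicU64::new(0) }
    }

    // Notifier side: setting an already-set bit is a no-op, so a
    // busy-looping notifier cannot exhaust any buffer.
    fn notify(&self, event_id: u32) {
        self.bits.fetch_or(1 << event_id, Ordering::Release);
    }

    // Listener side: atomically take all pending event ids at once.
    fn wait_all(&self, mut on_event: impl FnMut(u32)) {
        let mut pending = self.bits.swap(0, Ordering::Acquire);
        while pending != 0 {
            let id = pending.trailing_zeros();
            pending &= pending - 1; // clear the lowest set bit
            on_event(id);
        }
    }
}

fn main() {
    let bitset = EventBitset::new();
    bitset.notify(3);
    bitset.notify(3); // duplicate notifications collapse into one bit
    bitset.notify(7);

    let mut seen = Vec::new();
    bitset.wait_all(|id| seen.push(id));
    assert_eq!(seen, vec![3, 7]);
    println!("bitset notification verified");
}
```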

Furthermore, we refined the API so that you can choose to take either one EventId after another in a loop:

while let Some(event_id) = listener.blocking_wait_one()? {
    println!("event was triggered with id: {:?}", event_id);
}

or to acquire all received EventIds at once

listener.blocking_wait_all(|id| {
    println!("event was triggered with id: {:?}", id);
})?;

Bug Fixes

A big thanks to our first users, who started playing around with iceoryx2 and helped us refine the API and iron out the edges.

We fixed a ton of bugs!

Most bugs were connected to the decentralized nature of iceoryx2, and we encountered some races when endpoints connected and disconnected at a high frequency. However, many additional concurrent stress tests now give us the confidence that they stay fixed.

The communication mechanisms themselves did not raise bug reports, mainly because they have been proven in use for years in iceoryx1 and were ported directly to iceoryx2.

Performance Improvements

We took some time to improve the performance of iceoryx2 even further and realized that we have hit a point where performance becomes very architecture- and OS-dependent. Take a look at the iceoryx2 README, where we provide an overview of our results.

What Comes Next

Take a look at our Roadmap.

In Q2, we want to focus on:

  • our first language binding, for C
  • advanced monitoring, so that manual cleanup is no longer required when an application has crashed
  • sending serializable structs via shared memory, so that any kind of type - without restriction - can be sent
...