Advanced Messaging Patterns - Blackboard

Christian Eltzschig - 12/04/2025

iceoryx2 rust messaging-patterns

Our zero-copy communication library iceoryx2 already provides several messaging patterns that are well-suited for common use cases. For example, publish-subscribe is ideal for distributing sensor data to multiple consumers, request-response can be used to send commands—such as instructing a robot to move—and to receive a reply once the action is completed, and event messaging is perfect for waking up another process based on specific, user-defined events. A typical example: a robot searches for an object and sends a signal to another process once it has been found.

However, even these flexible messaging patterns, despite being zero-copy, can reach their limits in certain inter-process communication scenarios.

In this article, we discuss two such use cases and introduce a new messaging pattern called blackboard, which has completed the design phase and will be implemented in Q2 2025 in iceoryx2.

The blackboard messaging pattern is based on a key-value store. All keys have the same shared-memory-compatible type, like a fixed-size string or an integer, and every value can have a different shared-memory-compatible type.
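As a toy, single-process analogy of this concept (not the iceoryx2 API, which operates on shared memory across processes), the idea can be modeled as a map from one uniform key type to heterogeneously typed values:

```rust
use std::any::Any;
use std::collections::HashMap;

// All keys share a single type (u64 here); every value may have its own type.
struct Blackboard {
    entries: HashMap<u64, Box<dyn Any>>,
}

impl Blackboard {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Store a value of any ('static) type under a key.
    fn add<T: 'static>(&mut self, key: u64, value: T) {
        self.entries.insert(key, Box::new(value));
    }

    // Read the value back; returns None if the key is missing
    // or the requested type does not match the stored one.
    fn read<T: 'static>(&self, key: u64) -> Option<&T> {
        self.entries.get(&key)?.downcast_ref::<T>()
    }
}

fn main() {
    let mut bb = Blackboard::new();
    bb.add(1, 123u32);       // value type: u32
    bb.add(2, [0.0f32; 3]);  // value type: [f32; 3]

    assert_eq!(bb.read::<u32>(1), Some(&123));
    // Reading with the wrong type fails gracefully.
    assert_eq!(bb.read::<u32>(2), None);
}
```

In the real pattern, the map lives in shared memory and the values are shared-memory-compatible types, so readers in other processes can access them without copies.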

The pattern shines when:

  • there is a large number of readers,
  • the update frequency is low, or only a few values are updated at a high frequency, or individual consumers are interested in only a small subset of key-value pairs,
  • and the readers do not require a history of previous values.

In network protocols, such a messaging pattern would be hard to realize, but when shared memory is available, it feels quite natural to implement such a concept.

Examples

The Settings Daemon for an Embedded Device

One example comes from embedded devices. Assume you have an autonomous machine like a self-driving car. It might run tens or hundreds of processes that communicate with each other and depend on machine-specific configurations.

These configurations might even change at runtime, for instance when a sensor fails or the environment changes: the maximum speed could be reduced when sensor functionality is degraded (e.g., when driving into the sunset on a wet road after rain).

Again, many processes need access to a global configuration, and one process oversees and updates it as needed.

The Strategy Game

Assume you want to create a strategy game using multiple processes. You have one process that renders the game and handles user interactions. Another process handles the background simulation - determining where NPCs are, how units behave when given orders, and so on. Additionally, there may be separate processes that provide small AIs to control local behavior - such as deciding what a unit should do when under fire in a specific location.

All these processes need to be aware of the overall game state, which is managed by the background simulation. The renderer needs to know object positions, and the AI processes need to understand their local environment.

In the end, multiple processes need access to the shared game state, which is updated by a single process every iteration.

Problem with Publish-Subscribe

When using publish-subscribe in iceoryx2, the service guarantees that it never runs out of payload memory by pre-allocating enough of it upfront. The more subscribers there are, and the larger their buffer sizes and history requirements, the more memory iceoryx2 must reserve.

This can significantly increase memory usage, as iceoryx2 must assume worst-case scenarios. For example, if you have N subscribers and each holds onto a different sample indefinitely, and each has a buffer size of B, then every publisher data segment must allocate at least N * B payload samples.

In cases like the strategy game or the settings daemon, with many readers (hundreds or thousands) and large global state (e.g., 1MB), iceoryx2 might require 1GB of pre-allocated memory to support the worst case.
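To make the arithmetic concrete, here is a small sketch of the worst-case calculation; the figures (1000 readers, buffer size 1, 1 MiB of global state) are illustrative values taken from the scenario above:

```rust
// Worst-case payload memory a publish-subscribe service must pre-allocate:
// each of the N subscribers may hold on to B samples indefinitely, so the
// publisher's data segment needs at least N * B samples.
fn worst_case_bytes(subscribers: u64, buffer_size: u64, sample_bytes: u64) -> u64 {
    subscribers * buffer_size * sample_bytes
}

fn main() {
    // Illustrative figures: 1000 readers, buffer size 1, 1 MiB global state.
    let bytes = worst_case_bytes(1000, 1, 1024 * 1024);
    println!("worst case: {} MiB", bytes / (1024 * 1024)); // ~1 GiB
}
```

A blackboard, by contrast, stores each value exactly once, so its memory footprint does not grow with the number of readers.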

Another issue arises when delivering data to many endpoints. Even with zero-copy, an 8-byte offset must be delivered. This is usually done by iterating an internal list and pushing the offset into each subscriber’s internal queue. With more subscribers, latency increases.

A quick measurement using the iceoryx2 benchmarks reveals:

cargo run --bin benchmark-publish-subscribe --release -- \
    --bench-all --number-of-additional-subscribers 0
# latency: 90ns

cargo run --bin benchmark-publish-subscribe --release -- \
    --bench-all --number-of-additional-subscribers 10
# latency: 400ns

cargo run --bin benchmark-publish-subscribe --release -- \
    --bench-all --number-of-additional-subscribers 100
# latency: 3400ns

cargo run --bin benchmark-publish-subscribe --release -- \
    --bench-all --number-of-additional-subscribers 200
# latency: 7400ns

Compared to network protocols, this is still significantly faster. For instance, the best network-based solutions show a latency of roughly 5000ns for a single round trip. See: A Performance Study on the Throughput and Latency of Zenoh, MQTT, Kafka, and DDS

Nevertheless, two key questions remain:

  • Can we reduce memory usage and make it independent of the number of consumers/subscribers and their configuration?
  • Can we reduce latency and make it independent of the number of consumers/subscribers and their configuration?

The solution is the blackboard messaging pattern!

Blackboard

A service using the blackboard messaging pattern can be created like any other service in iceoryx2:

const KEY_VALUE_1: u64 = 1;
const KEY_VALUE_2: u64 = 2;

let blackboard_service = node
    .service_builder(&"My Service Name".try_into()?)
    // Creates a blackboard service with u64 as key type.
    // Any shared-memory compatible type can be used.
    .blackboard::<u64>()
    // Adds a key-value pair for KEY_VALUE_1 with SomeType as value type
    .add::<SomeType>(KEY_VALUE_1, SomeType::new(123))
    // Adds a key-value pair for KEY_VALUE_2 with AnotherType as value type
    .add::<AnotherType>(KEY_VALUE_2, AnotherType::new(456))
    .create()?;

Initially, the service will support a single writer, and the key-value pairs must be defined at creation. Later, we plan to allow dynamic key-value additions/removals, potentially from multiple writers.

The writer process looks like this:

let writer = blackboard_service.writer_builder().create()?;
let entry = writer.entry::<SomeType>(KEY_VALUE_1)?;
entry.update(SomeType::new(7));

The reader processes can look like this:

let reader = blackboard_service.reader_builder().create()?;
let entry = reader.entry::<SomeType>(KEY_VALUE_1)?;
println!("The entry has the value {}", *entry);

Notification on Value Update

If values should only be read when updated rather than using a polling loop, an additional wakeup mechanism is needed. iceoryx2 provides the event messaging pattern for this. (See the event example).

The idea is to create an additional service with the same name using the event pattern. Besides the writer, we create a notifier that sends a notification, using the key as event ID, whenever a value is updated. On the reader side, a listener waits for notifications and retrieves the new value when interested.

Add the following to the writer process:

let event_service = node
    .service_builder(&"My Service Name".try_into()?)
    .event()
    .open_or_create()?;

let notifier = event_service.notifier_builder().create()?;

// Update the value first
let entry = writer.entry::<SomeType>(KEY_VALUE_1)?;
entry.update(SomeType::new(5912));

// Notify the readers
notifier.notify_with_custom_event_id(EventId::new(KEY_VALUE_1 as _))?;

The reader process adds:

let reader = blackboard_service.reader_builder().create()?;
let entry = reader.entry::<SomeType>(KEY_VALUE_1)?;

let event_service = node
    .service_builder(&"My Service Name".try_into()?)
    .event()
    .open_or_create()?;

let listener = event_service.listener_builder().create()?;

// Wait for a notification or timeout
// Wait for a notification or timeout
if let Ok(Some(event_id)) = listener.timed_wait_one(TIMEOUT)? {
    // We are only interested in updates to KEY_VALUE_1
    if event_id == EventId::new(KEY_VALUE_1 as _) {
        println!("Entry was updated to {}", *entry);
    }
}

Conclusion

The blackboard messaging pattern is a powerful tool when:

  • a large number of readers need to consume a large data structure,
  • but each is only interested in a small subset of the data,
  • and updates are infrequent.

Typical use cases include global configuration settings adjusted at runtime, or a global state updated regularly, where each participant is only interested in a small portion of it.