libtcspc C++ API
Streaming TCSPC and time tag data processing
Processors

Description

Event processors.

Processors in libtcspc are usually classes defined in an internal namespace. They are exposed in the API through factory functions, named after a verb describing what the processor does, that return the processor by value. A few special processor factory functions (e.g., tcspc::merge()) return multiple processors, which can be assigned via structured binding. By convention, the factory function takes the downstream processor as its last parameter and takes ownership of it (or copies it, if passed an lvalue).
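The conventions above can be illustrated with a minimal sketch. The processor class, the factory function, and the sink below are all hypothetical examples written in the style the documentation describes; they are not part of libtcspc.

```cpp
#include <utility>
#include <vector>

// Hypothetical sink for demonstration; not part of libtcspc.
struct collect_ints {
    std::vector<int> *out; // non-owning, so results outlive the graph
    void handle(int const &event) { out->push_back(event); }
    void flush() {}
};

namespace internal {

// Hypothetical processor that doubles each int event.
template <typename Downstream> class double_each {
    Downstream downstream; // owned by value, per the convention

  public:
    explicit double_each(Downstream downstream)
        : downstream(std::move(downstream)) {}
    void handle(int const &event) { downstream.handle(2 * event); }
    void flush() { downstream.flush(); }
};

} // namespace internal

// Factory named after a verb; downstream is the last parameter, taken by
// value (moved, or copied if an lvalue); the processor is returned by value.
template <typename Downstream> auto double_each(Downstream downstream) {
    return internal::double_each<Downstream>(std::move(downstream));
}
```

Building a graph then proceeds from downstream to upstream, for example `auto proc = double_each(collect_ints{&out});`.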

All processors are movable but not necessarily copyable.

All processors (except for sinks) have a downstream processor. The downstream processor is moved into the next-upstream processor, so that an assembled processing graph is a single object (often with a very long type name). A few special processors (e.g., tcspc::broadcast(), tcspc::route()) have multiple downstream processors. Also, some special processors (e.g., tcspc::merge(), tcspc::type_erased_processor) do not contain their downstream processor(s) as a subobject, but they still own them, even if access is indirect (through a reference or pointer).

Thus a graph of processors can be built, but it must be built from downstream to upstream.

Once built, the processing graph operates in push mode: events are passed from upstream processors to downstream processors by function calls. Each processor is basically a state machine that changes state based on events received, and in some cases emits events to the downstream processor(s). The set of event types accepted by a given processor is determined by C++ type rules based on the processor's handle() member function overload set. The end of the stream of events is signaled down the chain of processors via the flush() member function. Processing may also terminate due to an error (see below).

Processor factory functions never call handle() or flush() on downstream processors. After construction, processors must always be prepared to receive any of their accepted events while processing continues (but they may signal an error if the sequence of events is incorrect). Behavior is undefined if handle() or flush() is called on a processor that has been flushed or has stopped with an error.

Unless specified otherwise, processors operate on a single thread.

Processors implement the following member functions (none of which should be noexcept):

  • void handle(E const &event), possibly with multiple overloads and/or with E as a template parameter. These functions handle individual events by updating the processor's internal state and emitting events downstream by calling the downstream processor's handle() function.
  • void handle(E &&event). This is optional, and not necessary when the event is never forwarded downstream or is known to be of trivial type.
  • void flush(), which conveys the end of stream. The processor emits any remaining events (due, for example, to buffered state), and flushes its downstream.
  • Introspection functions (see the return types for details).
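A sketch of this member-function set, using a hypothetical "delay by one event" processor (not part of libtcspc): it shows an lvalue handle() overload, the optional rvalue overload for a non-trivial event type, and a flush() that emits buffered state before flushing downstream.

```cpp
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sink; not part of libtcspc.
struct collect_strings {
    std::vector<std::string> *out;
    void handle(std::string const &event) { out->push_back(event); }
    void flush() {}
};

// Hypothetical processor that delays each event by one.
template <typename Downstream> class delay_one {
    std::optional<std::string> pending;
    Downstream downstream;

    void emit_pending() {
        if (pending) {
            downstream.handle(std::move(*pending));
            pending.reset();
        }
    }

  public:
    explicit delay_one(Downstream downstream)
        : downstream(std::move(downstream)) {}

    // Lvalue overload: must copy, because the caller may reuse the event.
    void handle(std::string const &event) {
        emit_pending();
        pending = event;
    }

    // Optional rvalue overload: avoids the copy for non-trivial event types.
    void handle(std::string &&event) {
        emit_pending();
        pending = std::move(event);
    }

    // End of stream: emit any buffered state, then flush downstream.
    void flush() {
        emit_pending();
        downstream.flush();
    }
};
```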

End of processing and error handling

When the input data has reached its end, the flush() function is used to propagate this information down the chain of processors, giving each processor a chance to emit any remaining events derived from the input already received.

A processor's handle() and flush() functions may throw an exception under two circumstances:

  • The processor reached a normal end of processing, for example because it detected the end of the part of the input that is of interest. In this case, the processor first calls flush() on its downstream(s). Then (if the call to downstream flush() did not throw) it throws tcspc::end_of_processing (derived from std::exception).
  • The processor encountered an error. In this case it throws an appropriate exception (always derived from std::exception) without flushing the downstream.

Warnings

For recoverable errors, some processors emit tcspc::warning_event rather than throwing an exception. The tcspc::stop() and tcspc::stop_with_error() processors can be used to end processing on a warning event.

Context, trackers, and accessors

See tcspc::context.

Guidelines for writing processors

In addition to following what is specified above:

  • Processor constructors (or factory functions) should check arguments and throw std::logic_error (or one of its derived exceptions) if incorrect. This is for playing nicely with the Python bindings; do not use assert().
  • The downstream processor should usually be the last non-static data member of the processor class, so that the overall data layout mirrors the order of processing. Cold data (such as data that is only accessed when finished processing) should be placed after the downstream member.
  • Ordinary data-processing processors should follow the Rule of Zero. Avoid const or reference data members.
  • When passing an lvalue (local variable or data member) event downstream, and the event value needs to be reused afterwards, the event should be passed using std::as_const(). Conversely, if the local event will not be reused, it may be passed using std::move().
  • The handle() member function is often overloaded for multiple event types, some of which may be template parameters, possibly constrained by a requires clause. Choose carefully between a requires clause and static_assert when specifying requirements on the handled event types, because they have different implications. Requirements specified by a requires clause are detected by tcspc::handler_for and will prevent the overload from competing with other overloads when not satisfied. Requirements specified by static_assert, if not satisfied, will cause a compile error after the overload has been selected. There are use cases for both.
  • When implementing handle() for both const lvalue and rvalue references, make sure that a generic forwarding reference handler does not shadow a specific const lvalue handler.
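The std::as_const()/std::move() guideline can be sketched with a hypothetical processor (not part of libtcspc) that forwards each event twice: the first pass uses std::as_const() because the value is reused afterwards, and the second uses std::move() because the value is no longer needed.

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical sink; not part of libtcspc.
struct string_sink {
    std::vector<std::string> *out;
    void handle(std::string const &event) { out->push_back(event); }
    void handle(std::string &&event) { out->push_back(std::move(event)); }
    void flush() {}
};

// Hypothetical processor that emits each event twice downstream.
template <typename Downstream> class duplicate {
    Downstream downstream;

  public:
    explicit duplicate(Downstream downstream)
        : downstream(std::move(downstream)) {}

    void handle(std::string &&event) {
        // The value is reused below, so pass it as a const lvalue to
        // prevent the downstream from moving it away:
        downstream.handle(std::as_const(event));
        // The value is no longer needed, so hand over ownership:
        downstream.handle(std::move(event));
    }

    void handle(std::string const &event) {
        std::string copy(event); // caller keeps its value; work on a copy
        handle(std::move(copy));
    }

    void flush() { downstream.flush(); }
};
```

Note that the two handle() overloads here are for a concrete event type; a generic forwarding-reference overload would have to be constrained so that it does not shadow the const lvalue overload.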

Topics

  • Core processors — Basic and generic processors.
  • Buffering processors — Processors for buffering data.
  • Branching processors — Processors for splitting the processing graph.
  • Merging processors — Processors for joining branches in the processing graph.
  • Input and output processors — Processors for reading and writing data from/to file-like streams.
  • Acquisition processors — Processors for acquiring data from hardware devices.
  • Decoding processors — Processors for decoding device events.
  • Timeline processors — Processors for managing and manipulating the absolute timeline.
  • Timing signal processors — Processors for transforming timing signal events.
  • Time correlation processors — Processors for time correlation.
  • Histogramming processors — Processors for histogramming.
  • Validation processors — Processors for data validation.
  • Statistics processors — Processors for collecting statistics.
  • Testing processors — Processors for unit testing of processors.