libtcspc C++ API
Streaming TCSPC and time tag data processing

Processors for reading and writing data from/to file-like streams.
Topics

- Binary stream processors: Processors for converting between events and binary data streams.

Functions

- template<typename Event, typename Downstream>
  auto tcspc::extract_bucket(Downstream downstream)
  Create a processor that extracts the bucket carried by an event.

- template<typename Event, typename InputStream, typename Downstream>
  auto tcspc::read_binary_stream(InputStream stream, arg::max_length<std::uint64_t> max_length, std::shared_ptr<bucket_source<Event>> buffer_provider, arg::granularity<std::size_t> granularity, Downstream downstream)
  Create a source that reads batches of events from a binary stream, such as a file.

- template<typename OutputStream>
  auto tcspc::write_binary_stream(OutputStream stream, std::shared_ptr<bucket_source<std::byte>> buffer_provider, arg::granularity<std::size_t> granularity)
  Create a sink that writes bytes to a binary stream, such as a file.
auto tcspc::extract_bucket(Downstream downstream)

Create a processor that extracts the bucket carried by an event.

Template parameters:
- Event: the event type, which must have the public data member data_bucket
- Downstream: downstream processor type (usually deduced)

Parameters:
- downstream: downstream processor
auto tcspc::read_binary_stream(InputStream stream,
                               arg::max_length<std::uint64_t> max_length,
                               std::shared_ptr<bucket_source<Event>> buffer_provider,
                               arg::granularity<std::size_t> granularity,
                               Downstream downstream)

Create a source that reads batches of events from a binary stream, such as a file.
The stream is either libtcspc's input stream abstraction (see Input streams) or an iostreams std::istream. In the latter case, it is wrapped using tcspc::istream_input_stream(). (Use of iostreams is not recommended due to usually poor performance.)
The stream must contain a contiguous array of Event, which must be a trivial type. Events are read from the stream in batches and placed into tcspc::bucket<Event> instances supplied by the given buffer_provider.
Each time the stream is read, any events that have been completely read are sent downstream in a bucket. The size of each read is determined by granularity (in bytes) and the size of Event: if sizeof(Event) does not exceed the granularity, each read is granularity bytes; if sizeof(Event) is larger, a multiple of the granularity may be read at once. The first read may be shortened so that subsequent read offsets are aligned to the read granularity, and the last read may be shortened to avoid reading past max_length.
The granularity can be tuned for best performance. If too small, reads may incur more overhead per byte read; if too large, CPU caches may be polluted. Small batch sizes may also pessimize downstream processing. It is best to try different powers of 2 and measure.
Template parameters:
- Event: the event type
- InputStream: input stream type
- Downstream: downstream processor type

Parameters:
- stream: the input stream (see Input streams)
- max_length: maximum number of bytes to read from the stream; should be a multiple of sizeof(Event), or std::numeric_limits<std::uint64_t>::max() to read to the end of the stream
- buffer_provider: bucket source providing event buffers; must be able to circulate at least 2 buckets without blocking
- granularity: minimum size, in bytes, to read in each iteration; a multiple of this value may be used if Event is larger
- downstream: downstream processor
auto tcspc::write_binary_stream(OutputStream stream,
                                std::shared_ptr<bucket_source<std::byte>> buffer_provider,
                                arg::granularity<std::size_t> granularity)

Create a sink that writes bytes to a binary stream, such as a file.
The stream is either libtcspc's output stream abstraction (see Output streams) or an iostreams std::ostream. In the latter case, it is wrapped using tcspc::ostream_output_stream(). (Use of iostreams is not recommended due to usually poor performance.)
The processor receives data in the form of tcspc::bucket<std::byte> or another type that can be explicitly converted to std::span<std::byte const> (see tcspc::view_as_bytes()). The bytes are written sequentially and contiguously to the stream.
For efficiency, data is written in batches whose size is a multiple of granularity (except possibly at the beginning and end of the stream).
The granularity can be tuned for best performance. If too small, writes may incur more overhead per byte written; if too large, CPU caches may be polluted (if the event size and write granularity are such that buffering is necessary). It is best to try different powers of 2 and measure.
If an error occurs (either in this processor or upstream), an incomplete file may be left behind (if the output stream was a regular file). If this is undesirable, application code should delete the file after closing it (by destroying the processor, if the file's lifetime is tied to the output stream).
Template parameters:
- OutputStream: output stream type

Parameters:
- stream: the output stream (see Output streams)
- buffer_provider: bucket source providing write buffers; must be able to circulate at least 1 bucket without blocking; may go unused if all events can be written directly
- granularity: minimum size, in bytes, to write; all writes (except possibly the first and last, which may be adjusted for alignment) will be a multiple of this value