ePIC Streaming WG meeting: Streaming Reco Follow up

US/Eastern
Description

We used this working meeting to follow up on Nathan’s talk on time-frame-based reconstruction, to solidify a few open concepts in our WG, and to make progress on their implementation in offline software.

 

Time frame structure

 

Jeff: is there any real reason to align time slices with 2^16 bunch crossings or with the beam rotation? If we choose to align with the EIC beam rotation, managing the bias from bunch crossings sitting at fixed positions in the time window would be critical.

Jin: this will also be a topic in the upcoming DAQ meeting: https://indico.bnl.gov/event/22945/

 

Consensus: preference not to align the time frame length with the EIC beam rotation.

 

Discussion 1: event keying

 

David: Option 2 (run/time frame/counter) can be derived from Option 1 (64-bit beam crossing counter).

Jeff: likes Option 1; likely both in the implementation.

Markus: likes both. Option 2 also works as (time frame, crossing counter within the frame).

Nathan: Option 2 matches JANA2’s internal bookkeeping well.

Markus: are we using the event number for analysis? An example use case is the event display.

Jin: a very useful example. In sPHENIX, approved event-display events were identified by the beam crossing counter.

Marco: the beam crossing counter appears to be unique.

Markus: analysis uses a subset of events, which could be the next layer of keying, with a translation (e.g., DAQ event to reconstruction event).

Jeff/Marco: second Markus. We need to identify bunch crossings and event types early in reconstruction, analogous to a trigger scheme (but in offline software).

Nathan: the reconstruction framework needs filtering; each filtering stage could have its own relative counter.

 

Consensus: both options for event keying will be used, with Option 1 (the 64-bit BCO) as the primary key. In addition, reconstruction will generate and tag an event counter when events are formed from time frames.
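As David noted, the Option 2 key can be derived from the Option 1 primary key. A minimal sketch of that mapping, assuming for illustration a fixed time frame length in bunch crossings (the names and the 2^16 frame length below are hypothetical, not an agreed ePIC parameter):

```cpp
#include <cstdint>
#include <utility>

// Hypothetical frame length in beam crossings (2^16 was discussed but not adopted).
constexpr std::uint64_t kCrossingsPerFrame = 65536;

// Derive the Option 2 key (time frame index, crossing counter within the frame)
// from the Option 1 primary key (64-bit beam crossing counter, BCO).
std::pair<std::uint64_t, std::uint64_t> frameKeyFromBCO(std::uint64_t bco) {
    return {bco / kCrossingsPerFrame, bco % kCrossingsPerFrame};
}

// Invert the mapping: recover the global BCO from the frame-local key.
std::uint64_t bcoFromFrameKey(std::uint64_t frame, std::uint64_t crossingInFrame) {
    return frame * kCrossingsPerFrame + crossingInFrame;
}
```

Because the two keys are related by an exact integer division, either can serve as the lookup key while the 64-bit BCO remains the single source of truth.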

 

 

Discussion 2: what is a (DAQ) Run for ePIC?

 

Jeff: the FEE takes minutes to configure, which motivates a run structure so that detectors are configured at the same time.

Jeff: scalers run continuously to monitor the beam/detector.

Jeff: define a Run as a detector configuration.

Markus: continuous scaler info is also analyzed by streaming computing; the run type is configured by humans.

Simon: the luminosity system prefers to run as much as possible, except during beam tuning.

Jin: sPHENIX/PHENIX use ~1-hour-long runs, which mark configuration changes.

David: GlueX/CLAS12 use the run number to index calibration, configuration, and QA.

Kolja: STAR is similar (~half-hour runs) for calibration/configuration. There were also beam-energy-scan runs, which used very short fills.

Markus: The streaming readout is always running. The purpose of a DAQ Run could be for humans to identify problematic periods. 

 

Consensus: a run structure will be used, driven by configuration changes, plus continuous readout of beam/detector monitoring information.

 

Discussion 3: slow control (SC) data

 

Nathan: (1) advocates storing slow control data in the database; (2) we need to consider how calibrations are loaded from the database.

David: Redundancy is important, so good to have both database and data file copies of SC data.

David/Jin: good to use the database copy as the primary source of slow control information.

David/Jin: for the file copy of the SC data, we can consider either direct embedding of SC data into the fast readout data or a separate SC data file. Direct embedding is safer, given that the database already provides a convenient access path.

Markus: what is the data flow for slow control data from online to offline? Shall we use two streams (raw data file and database copying) or one stream (raw data file, then extract the slow control information into the database)? Jin: we should follow up on the data flow discussion with the online database folks.

 

Consensus: slow control data will be stored redundantly, in both the database and embedded in the raw data file. Follow-up is needed on the implementation of the SC data flow from online to offline.
