playnano.processing.pipeline module¶
Module containing the ProcessingPipeline class for AFMImageStack processing.
This module provides ProcessingPipeline, which runs a sequence of mask/filter/method/plugin steps on an AFMImageStack. Each step’s output is stored in stack.processed (for filters) or stack.masks (for masks), and detailed provenance (timestamps, parameters, step type, version info, keys) is recorded in stack.provenance["processing"]. Environment metadata at pipeline start is recorded in stack.provenance["environment"].
- class playnano.processing.pipeline.ProcessingPipeline(stack: AFMImageStack)[source]¶
Bases: object
Orchestrates a sequence of masking and filtering steps on an AFMImageStack.
This pipeline records outputs and detailed provenance for each step. Each step is specified by a name and keyword arguments:
"clear": resets any active mask.Mask steps: compute boolean masks stored in
stack.masks[...].Filter/method/plugin steps: apply to the current data (and mask if present), storing results in
stack.processed[...].
Provenance for each step, including index, name, parameters, timestamp, step type, version, keys, and summaries, is appended to
stack.provenance["processing"]["steps"]. Additionally, a mapping from step name to a list of snapshot keys is stored instack.provenance["processing"]["keys_by_name"]. The final processed array overwritesstack.data, and environment metadata is captured once instack.provenance["environment"].- add_filter(filter_name: str, **kwargs) ProcessingPipeline[source]¶
Add a filter step to the pipeline.
- Parameters:
filter_name (str) – The name of the registered filter function to apply.
**kwargs – Additional keyword arguments for the filter function.
- Returns:
The pipeline instance (for method chaining).
- Return type:
ProcessingPipeline
Notes
If a mask is currently active, the pipeline will attempt to use a masked version of the filter (from MASK_FILTERS_MAP) if available. Otherwise, the unmasked filter is applied to the whole dataset.
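For example, a minimal chained use of add_filter might look like the sketch below; the filter names and keyword arguments are hypothetical placeholders, not names guaranteed to be registered:

from playnano.processing.pipeline import ProcessingPipeline

# `stack` is an existing AFMImageStack; loading it is outside this example.
pipeline = ProcessingPipeline(stack)

# add_filter returns the pipeline, so steps can be chained. The filter names
# and keyword arguments below are hypothetical placeholders, not names that
# are guaranteed to be registered.
pipeline.add_filter("gaussian_filter", sigma=1.5).add_filter("plane_level")

result = pipeline.run()  # executes the queued steps and returns the final array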
- add_mask(mask_name: str, **kwargs) ProcessingPipeline[source]¶
Add a masking step to the pipeline.
- Parameters:
mask_name (str) – The name of the registered mask function to apply.
**kwargs – Additional parameters passed to the mask function.
- Returns:
The pipeline instance (for method chaining).
- Return type:
ProcessingPipeline
Notes
If a mask is currently active (i.e. not cleared), this new mask will be logically combined (ORed) with the existing one.
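As a sketch of the OR-combination described above, two mask steps followed by a filter might look like the following; the mask and filter names and their parameters are hypothetical:

from playnano.processing.pipeline import ProcessingPipeline

pipeline = ProcessingPipeline(stack)  # `stack`: an existing AFMImageStack

# Hypothetical mask names/parameters; the second mask is ORed with the first
# because a mask is already active when it is added.
pipeline.add_mask("threshold_mask", threshold=2.0)
pipeline.add_mask("edge_mask", width=5)

# With a mask active, a masked variant of the filter (from MASK_FILTERS_MAP)
# is used when available; the filter name is again hypothetical.
pipeline.add_filter("median_filter", size=3)

pipeline.run()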
- clear_mask() ProcessingPipeline[source]¶
Add a step to clear the current mask.
- Returns:
The pipeline instance (for method chaining).
- Return type:
ProcessingPipeline
Notes
Calling this resets the masking state, so subsequent filters will be applied to the entire dataset unless a new mask is added.
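A sketch of using clear_mask between masked and unmasked filtering (mask and filter names hypothetical):

from playnano.processing.pipeline import ProcessingPipeline

pipeline = ProcessingPipeline(stack)  # `stack`: an existing AFMImageStack

# Hypothetical mask/filter names and parameters.
pipeline.add_mask("threshold_mask", threshold=2.0)
pipeline.add_filter("median_filter", size=3)       # runs under the active mask

pipeline.clear_mask()                              # masking state is reset here

pipeline.add_filter("gaussian_filter", sigma=1.0)  # runs on the entire dataset
pipeline.run()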
- run() ndarray[source]¶
Execute configured steps on the AFMImageStack, storing outputs and provenance.
The pipeline iterates through all added masks, filters, and plugins in order, applying each to the current data. Masks are combined if multiple are applied before a filter. Each step’s output is stored in stack.processed (filters) or stack.masks (masks), and a detailed provenance record is saved in stack.provenance["processing"].
Behavior¶
1. Record or update environment metadata via gather_environment_info() into stack.provenance["environment"].
2. Reset previous processing provenance under stack.provenance["processing"], ensuring that keys "steps" (a list) and "keys_by_name" (a dictionary) exist and are cleared.
3. If not already present, snapshot the original data as "raw" in stack.processed.
4. Iterate over self.steps in order (1-based index):
- Resolve the step type via stack._resolve_step(step_name), which returns a tuple of the form (step_type, fn).
- Record a timestamp (from utc_now_iso()), index, name, parameters, step type, function version (from fn.__version__ or plugin lookup), and module name; a sketch of such a record appears after this list.
- If step_type is "clear": reset the current mask to None and record "mask_cleared": True in the provenance entry.
- If step_type is "mask": call stack._execute_mask_step(fn, arr, **kwargs) to compute a boolean mask array. If there is no existing mask, store it under a new key step_<idx>_<mask_name> in stack.masks; otherwise, overlay it with the previous mask (logical OR) under a derived key. Update the current mask and record "mask_key" and "mask_summary" in provenance.
- Else (filter/method/plugin): call stack._execute_filter_step(fn, arr, mask, step_name, **kwargs) to obtain the new array. Store the result under stack.processed["step_<idx>_<safe_name>"] and update arr. Record "processed_key" and "output_summary" in provenance.
5. After all steps, overwrite stack.data with arr.
6. Build stack.provenance["processing"]["keys_by_name"], mapping each step name to the list of stored keys (processed_key or mask_key) in order.
7. Return the final processed array.
- Returns:
The final processed data array, now also stored in stack.data.
- Return type:
np.ndarray
- Raises:
RuntimeError – If a step cannot be resolved or executed due to misconfiguration.
ValueError – If overlaying a mask fails due to a missing previous mask key (propagated).
Exception – Any exception raised by a step function is logged and re-raised.
Notes
The method ensures a raw copy of the original stack exists under stack.processed["raw"].
Mask steps may be overlaid with previous masks using logical OR.
stack_edit steps other than drop_frames automatically delegate to drop_frames to maintain provenance consistency.
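Putting the pieces together, a complete run and a look at the recorded outputs might resemble the sketch below; the mask and filter names are hypothetical, and the AFMImageStack is assumed to have been loaded elsewhere:

from playnano.processing.pipeline import ProcessingPipeline

# `stack` is an AFMImageStack loaded elsewhere; mask/filter names are hypothetical.
pipeline = (
    ProcessingPipeline(stack)
    .add_mask("threshold_mask", threshold=2.0)
    .add_filter("median_filter", size=3)
    .clear_mask()
    .add_filter("gaussian_filter", sigma=1.0)
)

result = pipeline.run()  # final array; also written back to stack.data

# Snapshots and provenance recorded by run():
print(sorted(stack.processed))   # includes "raw" plus step_<idx>_<name> keys
print(sorted(stack.masks))       # e.g. the key for the threshold mask step
print(stack.provenance["processing"]["keys_by_name"])
print(stack.provenance["environment"])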