playnano.processing package¶
Submodules¶
playnano.processing.core module¶
Core functions for loading and processing AFMImageStacks.
- playnano.processing.core.process_stack(input_path: Path, channel: str, steps: List[Tuple[str, Dict]]) AFMImageStack[source]¶
Load an AFMImageStack from a file, apply a list of processing steps, and return it.
- Parameters:
input_path (Path) – Path to the AFM stack file.
channel (str) – Channel to load (e.g., ‘h’, ‘z’, etc.).
steps (list of tuple) – List of processing steps in the form (step_name, kwargs). Special step_name values: “clear” clears the current mask; “mask” applies a mask function with kwargs; any other name is treated as a filter name with kwargs.
- Returns:
The processed AFMImageStack.
- Return type:
AFMImageStack
- Raises:
LoadError – If the AFM stack cannot be loaded from input_path.
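The special step_name dispatch described above can be sketched as a small loop over (step_name, kwargs) tuples. This is an illustration only, not playnano's implementation: the registries, the "method" key used to pick a mask function, and the filter signatures here are all hypothetical.

```python
import numpy as np

# Hypothetical registries standing in for playnano's real mask/filter registries
MASKS = {"mask_threshold": lambda d, threshold=0.0: d > threshold}

def _zero_mean(d, mask=None):
    # Subtract the mean, computed over background only when a mask is active
    bg = d if mask is None else d[~mask]
    return d - bg.mean()

FILTERS = {"zero_mean": _zero_mean}

def apply_steps(data, steps):
    """Mimic process_stack's special-step dispatch (sketch only)."""
    mask = None
    for name, kwargs in steps:
        if name == "clear":            # special value: drop the current mask
            mask = None
        elif name == "mask":           # special value: compute a boolean mask
            opts = dict(kwargs)
            mask = MASKS[opts.pop("method")](data, **opts)
        else:                          # anything else: a registered filter
            data = FILTERS[name](data, mask=mask, **kwargs)
    return data

frame = np.array([[0.0, 1.0], [2.0, 3.0]])
out = apply_steps(frame, [("mask", {"method": "mask_threshold", "threshold": 1.5}),
                          ("zero_mean", {})])
```

With the mask excluding the two high pixels, the mean is taken over the background (0 and 1), so the background is centered on zero while the masked feature keeps its relative height.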
playnano.processing.filters module¶
Module for applying flattening and filtering to AFM images stored as NumPy arrays.
- playnano.processing.filters.gaussian_filter(data: ndarray, sigma: float = 1.0) ndarray[source]¶
Apply a Gaussian low-pass filter to smooth high-frequency noise.
- Parameters:
data (np.ndarray) – 2D AFM image data.
sigma (float) – Standard deviation for Gaussian kernel, in pixels.
- Returns:
Smoothed image.
- Return type:
np.ndarray
- playnano.processing.filters.polynomial_flatten(data: ndarray, order: int = 2) ndarray[source]¶
Subtract a 2D polynomial surface of given order to flatten AFM image data.
- Parameters:
data (np.ndarray) – 2D AFM image data.
order (int) – Polynomial order for surface fitting (e.g., 1 for linear, 2 for quadratic).
- Returns:
Flattened image with polynomial background removed.
- Return type:
np.ndarray
- Raises:
ValueError – If data is not a 2D array or if order is not a positive integer.
- playnano.processing.filters.remove_plane(data: ndarray) ndarray[source]¶
Fit a 2D plane to the image using linear regression and subtract it.
Uses a 2D plane (z = ax + by + c) to remove overall tilt.
- Parameters:
data (np.ndarray) – 2D AFM image data.
- Returns:
Plane-removed image.
- Return type:
np.ndarray
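The plane fit z = ax + by + c amounts to an ordinary least-squares fit over pixel coordinates. A minimal NumPy sketch of the same idea (not the library's exact implementation):

```python
import numpy as np

def remove_plane_sketch(data):
    """Fit z = a*x + b*y + c by least squares and subtract the fitted plane."""
    h, w = data.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Design matrix: one row per pixel, columns [x, y, 1]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, data.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return data - plane

# A purely tilted surface should be removed almost exactly
yy, xx = np.mgrid[0:32, 0:32]
tilted = 0.3 * xx + 0.1 * yy + 5.0
flat = remove_plane_sketch(tilted.astype(float))
```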
- playnano.processing.filters.row_median_align(data: ndarray) ndarray[source]¶
Subtract the median of each row from that row to remove horizontal banding.
- Parameters:
data (np.ndarray) – 2D AFM image data.
- Returns:
Row-aligned image.
- Return type:
np.ndarray
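Row-median alignment is a one-liner in NumPy; a sketch of the operation (not the library's code):

```python
import numpy as np

def row_median_align_sketch(data):
    """Subtract each row's median from that row (removes horizontal banding)."""
    return data - np.median(data, axis=1, keepdims=True)

# Two rows with different offsets collapse onto the same baseline
banded = np.array([[1.0, 2.0, 3.0],
                   [11.0, 12.0, 13.0]])
aligned = row_median_align_sketch(banded)
```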
- playnano.processing.filters.zero_mean(data: ndarray) ndarray[source]¶
Subtract the overall mean height to center data around zero.
- Parameters:
data (np.ndarray) – 2D AFM image data.
- Returns:
Zero-mean image.
- Return type:
np.ndarray
playnano.processing.mask_generators module¶
Module for masking features of AFM images stored as NumPy arrays.
- playnano.processing.mask_generators.mask_adaptive(data: ndarray, block_size: int = 15, offset: float = 0.0) ndarray[source]¶
Adaptive local mean threshold per block.
- Parameters:
data (numpy.ndarray) – Input 2D array.
block_size (int, optional) – Size of the local block used to compute the mean threshold. Default is 15.
offset (float, optional) – Constant offset applied to the local mean threshold. Default is 0.0.
- Returns:
Boolean mask array where True indicates pixels above the threshold.
- Return type:
np.ndarray
- playnano.processing.mask_generators.mask_below_threshold(data: ndarray, threshold: float = 0.0) ndarray[source]¶
Mask where data < threshold.
- Parameters:
data (numpy.ndarray) – Input 2D array.
threshold (float, optional) – Threshold value. Pixels less than this will be True. Default is 0.0.
- Returns:
Boolean mask array.
- Return type:
np.ndarray
- playnano.processing.mask_generators.mask_mean_offset(data: ndarray, factor: float = 1.0) ndarray[source]¶
Mask values greater than mean plus factor * standard deviation.
- Parameters:
data (numpy.ndarray) – Input 2D array.
factor (float, optional) – Factor multiplied by the standard deviation to define the threshold. Default is 1.0.
- Returns:
Boolean mask array.
- Return type:
np.ndarray
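The mean-offset mask is a simple statistical threshold; a sketch of the computation (not the library's code):

```python
import numpy as np

def mask_mean_offset_sketch(data, factor=1.0):
    """True where data exceeds mean + factor * std."""
    return data > data.mean() + factor * data.std()

# Only the single outlier pixel exceeds mean + 1*std here
frame = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 10.0]])
mask = mask_mean_offset_sketch(frame, factor=1.0)
```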
- playnano.processing.mask_generators.mask_morphological(data: ndarray, threshold: float = 0.0, structure_size: int = 3) ndarray[source]¶
Apply threshold and morphological closing to mask foreground.
- Parameters:
data (numpy.ndarray) – Input 2D array.
threshold (float, optional) – Threshold value applied before closing. Default is 0.0.
structure_size (int, optional) – Size of the structuring element used for morphological closing. Default is 3.
- Returns:
Boolean mask array.
- Return type:
np.ndarray
- playnano.processing.mask_generators.mask_threshold(data: ndarray, threshold: float = 0.0) ndarray[source]¶
Mask where data > threshold.
- Parameters:
data (numpy.ndarray) – Input 2D array.
threshold (float, optional) – Threshold value. Pixels greater than this will be True. Default is 0.0.
- Returns:
Boolean mask array.
- Return type:
np.ndarray
playnano.processing.masked_filters module¶
Module for filtering AFM data in NumPy arrays with a boolean mask.
- playnano.processing.masked_filters.polynomial_flatten_masked(data: ndarray, mask: ndarray, order: int = 2) ndarray[source]¶
Fit a 2D polynomial using background (mask==False) and subtract it.
- Parameters:
data (np.ndarray) – 2D AFM image.
mask (np.ndarray) – Boolean mask of same shape; True=foreground, False=background.
order (int) – Polynomial order. Default order=2.
- Returns:
Polynomial-flattened image.
- Return type:
np.ndarray
- Raises:
ValueError – If mask.shape != data.shape or order is not a positive integer.
- playnano.processing.masked_filters.remove_plane_masked(data: ndarray, mask: ndarray) ndarray[source]¶
Fit a 2D plane on background only and subtract it from the full image.
- Parameters:
data (np.ndarray) – 2D AFM image.
mask (np.ndarray) – Boolean mask of same shape; True=foreground (excluded), False=background (used to fit).
- Returns:
Plane-removed image.
- Return type:
np.ndarray
- Raises:
ValueError – If mask.shape != data.shape.
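Fitting the plane on background pixels only, then subtracting it from the full image, can be sketched by restricting the least-squares system to rows where the mask is False (an illustration, not the library's implementation):

```python
import numpy as np

def remove_plane_masked_sketch(data, mask):
    """Fit z = a*x + b*y + c on background (mask == False), subtract everywhere."""
    if mask.shape != data.shape:
        raise ValueError("mask.shape must match data.shape")
    h, w = data.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    bg = ~mask.ravel()                       # fit on background rows only
    coeffs, *_ = np.linalg.lstsq(A[bg], data.ravel()[bg], rcond=None)
    return data - (A @ coeffs).reshape(h, w)

# Tilted background with a raised "feature"; mask excludes the feature from the fit
yy, xx = np.mgrid[0:16, 0:16]
data = 0.2 * xx + 0.1 * yy
data[5:8, 5:8] += 3.0
mask = np.zeros_like(data, dtype=bool)
mask[5:8, 5:8] = True
flat = remove_plane_masked_sketch(data, mask)
```

Because the feature is excluded from the fit, the background flattens to zero while the feature keeps its full 3.0 height; an unmasked fit would have been biased upward by the feature.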
- playnano.processing.masked_filters.row_median_align_masked(data: ndarray, mask: ndarray) ndarray[source]¶
Compute each row’s median using background pixels and subtract from each full row.
- Parameters:
data (np.ndarray) – 2D AFM image.
mask (np.ndarray) – Boolean mask of same shape; True=foreground, False=background.
- Returns:
Row-masked-alignment image.
- Return type:
np.ndarray
- Raises:
ValueError – If mask.shape != data.shape.
- playnano.processing.masked_filters.zero_mean_masked(data: ndarray, mask: ndarray = None) ndarray[source]¶
Subtract the overall mean height to center the background around zero.
If a mask is provided, mean is computed only over background (mask == False).
- Parameters:
data (np.ndarray) – 2D AFM image data.
mask (np.ndarray, optional) – Boolean mask of same shape as data; True indicates region to exclude from mean.
- Returns:
Zero-mean image.
- Return type:
np.ndarray
playnano.processing.pipeline module¶
Module containing the ProcessingPipeline class for AFMImageStack processing.
This module provides ProcessingPipeline, which runs a sequence of mask/filter/method/plugin steps on an AFMImageStack. Each step’s output is stored in stack.processed (for filters) or stack.masks (for masks), and detailed provenance (timestamps, parameters, step type, version info, keys) is recorded in stack.provenance[“processing”]. Environment metadata at pipeline start is recorded in stack.provenance[“environment”].
- class playnano.processing.pipeline.ProcessingPipeline(stack: AFMImageStack)[source]¶
Bases: object
Orchestrates a sequence of masking and filtering steps on an AFMImageStack.
This pipeline records outputs and detailed provenance for each step. Each step is specified by a name and keyword arguments:
"clear": resets any active mask.
Mask steps: compute boolean masks stored in stack.masks[...].
Filter/method/plugin steps: apply to the current data (and mask if present), storing results in stack.processed[...].
Provenance for each step, including index, name, parameters, timestamp, step type, version, keys, and summaries, is appended to stack.provenance["processing"]["steps"]. Additionally, a mapping from step name to a list of snapshot keys is stored in stack.provenance["processing"]["keys_by_name"]. The final processed array overwrites stack.data, and environment metadata is captured once in stack.provenance["environment"].
- add_filter(filter_name: str, **kwargs) ProcessingPipeline[source]¶
Add a filter step to the pipeline.
- Parameters:
filter_name (str) – The name of the registered filter function to apply.
**kwargs – Additional keyword arguments for the filter function.
- Returns:
The pipeline instance (for method chaining).
- Return type:
ProcessingPipeline
Notes
If a mask is currently active, the pipeline will attempt to use a masked version of the filter (from MASK_FILTERS_MAP) if available. Otherwise, the unmasked filter is applied to the whole dataset.
- add_mask(mask_name: str, **kwargs) ProcessingPipeline[source]¶
Add a masking step to the pipeline.
- Parameters:
mask_name (str) – The name of the registered mask function to apply.
**kwargs – Additional parameters passed to the mask function.
- Returns:
The pipeline instance (for method chaining).
- Return type:
ProcessingPipeline
Notes
If a mask is currently active (i.e. not cleared), this new mask will be logically combined (ORed) with the existing one.
- clear_mask() ProcessingPipeline[source]¶
Add a step to clear the current mask.
- Returns:
The pipeline instance (for method chaining).
- Return type:
ProcessingPipeline
Notes
Calling this resets the masking state, so subsequent filters will be applied to the entire dataset unless a new mask is added.
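Because each add_* method returns the pipeline instance, steps can be chained fluently before run(). A toy sketch of this fluent-interface pattern (not the real ProcessingPipeline; the step names are illustrative):

```python
class ToyPipeline:
    """Minimal fluent-interface sketch of the add_*/clear_mask/run pattern."""

    def __init__(self):
        self.steps = []

    def add_mask(self, name, **kwargs):
        self.steps.append(("mask", name, kwargs))
        return self            # returning self enables method chaining

    def add_filter(self, name, **kwargs):
        self.steps.append(("filter", name, kwargs))
        return self

    def clear_mask(self):
        self.steps.append(("clear", None, {}))
        return self

steps = (
    ToyPipeline()
    .add_mask("mask_threshold", threshold=0.5)
    .add_filter("remove_plane")
    .clear_mask()
    .add_filter("zero_mean")
    .steps
)
```

After the clear_mask() step, later filters apply to the whole dataset again, mirroring the behavior documented above.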
- run() ndarray[source]¶
Execute configured steps on the AFMImageStack, storing outputs and provenance.
The pipeline iterates through all added mask, filter, and plugin steps in order, applying each to the current data. Masks are combined if multiple are applied before a filter. Each step’s output is stored in stack.processed (filters) or stack.masks (masks), and a detailed provenance record is saved in stack.provenance[“processing”].
Behavior¶
1. Record or update environment metadata via gather_environment_info() into stack.provenance["environment"].
2. Reset previous processing provenance under stack.provenance["processing"], ensuring that the keys "steps" (a list) and "keys_by_name" (a dictionary) exist and are cleared.
3. If not already present, snapshot the original data as "raw" in stack.processed.
4. Iterate over self.steps in order (1-based index) with _run_single_step(...):
- Resolve the step type via stack._resolve_step(step_name), which returns a tuple of the form (step_type, fn).
- Record a timestamp (from utc_now_iso()), index, name, parameters, step type, function version (from fn.__version__ or plugin lookup), and module name.
- If step_type is "clear":
Reset the current mask to None.
Record "mask_cleared": True in the provenance entry.
- If step_type is "mask":
Call stack._execute_mask_step(fn, arr, **kwargs) to compute a boolean mask array.
If there is no existing mask, store it under a new key step_<idx>_<mask_name> in stack.masks.
Otherwise, overlay it with the previous mask (logical OR) under a derived key.
Update the current mask and record "mask_key" and "mask_summary" in provenance.
- If step_type is filter/method/plugin:
Call stack._execute_filter_step(fn, arr, mask, step_name, **kwargs) to obtain the new array.
Store the result under stack.processed["step_<idx>_<safe_name>"] and update arr.
Record "processed_key" and "output_summary" in provenance.
- If step_type is video_filter/video_plugin:
Call stack._execute_video_processing_step(fn, arr, **kwargs) to obtain the new array.
Store the result under stack.processed["step_<idx>_<safe_name>"] and update arr.
Record "processed_key" and "output_summary" in provenance.
- If step_type is stack_edit:
If the step name is drop_frames, call it directly to get the new array.
Otherwise, call the stack edit function to get indices to drop, then delegate to the drop_frames function to perform the edit. This ensures that all stack edits are recorded in a consistent way in provenance.
Store the result under stack.processed["step_<idx>_drop_frames"] and update arr.
- Otherwise, raise a warning for the unrecognized step type.
5. After all steps, overwrite stack.data with arr.
6. Build stack.provenance["processing"]["keys_by_name"], mapping each step name to the list of stored keys (processed_key or mask_key) in order.
7. Return the final processed array.
- Returns:
The final processed data array, now also stored in stack.data.
- Return type:
np.ndarray
- Raises:
RuntimeError – If a step cannot be resolved or executed due to misconfiguration.
ValueError – If overlaying a mask fails due to a missing previous mask key (propagated).
Exception – Any exception raised by a step function is logged and re-raised.
Notes
The method ensures a raw copy of the original stack exists under stack.processed[“raw”].
Mask steps may be overlaid with previous masks using logical OR.
Non-drop_frames stack_edit steps automatically delegate to drop_frames to maintain provenance consistency.
playnano.processing.stack_edit module¶
Functions for editing AFM image stacks by removing or selecting frames.
These functions are designed to be called by the ProcessingPipeline, which handles provenance tracking and updates to the AFMImageStack object. The functions here operate purely on data arrays or frame index lists.
Only ‘drop_frames’ performs actual stack edits. Other registered stack_edit functions return indices to drop, which are then passed to ‘drop_frames’ to ensure consistent provenance tracking.
Functions¶
drop_frames : Remove specific frames from a 3D array.
drop_frame_range : Generate a list of frame indices to drop within a given range.
select_frames : Generate a list of frame indices to drop, keeping only the selected frames.
- playnano.processing.stack_edit.drop_frame_range(data: ndarray, start: int, end: int) list[int][source]¶
Generate indices to drop within a given range of frames.
- Parameters:
data (np.ndarray) – 3D stack of frames; used to validate the range against the number of frames.
start (int) – First frame index of the range to drop.
end (int) – Last frame index of the range to drop.
- Returns:
List of indices that should be dropped.
- Return type:
list of int
- Raises:
ValueError – If the range is invalid or out of bounds.
- playnano.processing.stack_edit.drop_frames(data: ndarray, indices_to_drop: list[int]) ndarray[source]¶
Remove specific frames from a 3D array.
- Parameters:
data (np.ndarray) – 3D array of shape (n_frames, height, width).
indices_to_drop (list of int) – Indices of frames to remove.
- Returns:
New array with the specified frames removed.
- Return type:
np.ndarray
- Raises:
ValueError – If any provided indices are out of bounds or if data is not 3D.
Notes
The function does not modify the input array in place.
The ProcessingPipeline is responsible for updating metadata and provenance.
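The frame-removal operation maps naturally onto np.delete along the time axis; a minimal sketch with the documented bounds checks (an illustration, not the library's code):

```python
import numpy as np

def drop_frames_sketch(data, indices_to_drop):
    """Return a copy of the 3D stack with the given frames removed."""
    if data.ndim != 3:
        raise ValueError("expected a 3D stack (n_frames, H, W)")
    if any(i < 0 or i >= data.shape[0] for i in indices_to_drop):
        raise ValueError("frame index out of bounds")
    # np.delete returns a new array; the input is not modified in place
    return np.delete(data, indices_to_drop, axis=0)

stack = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)
trimmed = drop_frames_sketch(stack, [1, 3])   # keeps frames 0 and 2
```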
- playnano.processing.stack_edit.register_stack_edit_processing() dict[str, Callable][source]¶
Return a dictionary of registered stack editing processing filters.
Keys are names of the operations; values are the functions themselves. drop_frames is the operational function: it takes a 3D stack (n_frames, H, W) and a list of indices and returns an ndarray. drop_frame_range and select_frames are helper functions that return lists of indices to drop, which can be passed to drop_frames.
- playnano.processing.stack_edit.select_frames(data: ndarray, keep_indices: list[int]) list[int][source]¶
Generate a list of frame indices to drop, keeping only the selected frames.
- Parameters:
data (np.ndarray) – 3D stack of frames; used to validate indices against the number of frames.
keep_indices (list of int) – Indices of frames to keep.
- Returns:
List of frame indices that should be dropped.
- Return type:
list of int
- Raises:
ValueError – If keep_indices contains out-of-range values.
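Both helpers reduce to simple index arithmetic. A sketch of the two index generators (illustrative only; whether drop_frame_range treats end as inclusive is an assumption here, as the docstring does not say):

```python
def drop_frame_range_sketch(n_frames, start, end):
    """Indices to drop for frames start..end (assumed inclusive), bounds-checked."""
    if not (0 <= start <= end < n_frames):
        raise ValueError("invalid frame range")
    return list(range(start, end + 1))

def select_frames_sketch(n_frames, keep_indices):
    """Indices to drop so that only keep_indices remain."""
    keep = set(keep_indices)
    if any(i < 0 or i >= n_frames for i in keep):
        raise ValueError("keep_indices out of range")
    return [i for i in range(n_frames) if i not in keep]
```

Either result can then be handed to drop_frames, keeping all stack edits on the same provenance path.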
playnano.processing.video_processing module¶
Video processing functions for AFM time-series (stacks of frames).
This module provides functions that operate on 3D numpy arrays (time-series of 2D AFM frames). These include:
Frame alignment to compensate for drift
Cropping and padding utilities
Temporal (time-domain) filters
Future extensions such as spatio-temporal denoising
All functions follow a NumPy-style API: input stacks are 3D arrays with shape (n_frames, height, width). Outputs are processed stacks and a metadata dictionary.
- playnano.processing.video_processing.align_frames(stack: ndarray, reference_frame: int = 0, method: str = 'fft_cross_correlation', mode: str = 'pad', debug: bool = False, max_shift: int | None = None, pre_filter_sigma: float | None = None, max_jump: int | None = None)[source]¶
Align a stack of AFM frames to a reference frame using integer-pixel shifts.
Alignment is performed using either FFT-based or full cross-correlation. Jump smoothing prevents abrupt unrealistic displacements between consecutive frames by limiting the change in shift relative to the previous frame.
- Parameters:
stack (np.ndarray[float]) – 3D array of shape (n_frames, height, width) containing the input AFM image stack.
reference_frame (int, optional) – Index of the frame to use as the alignment reference (default 0). Must be within [0, n_frames-1].
method ({"fft_cross_correlation", "full_cross_correlation"}, optional) – Alignment method (default “fft_cross_correlation”). FFT-based cross-correlation is generally faster and uses less memory for large frames.
mode ({"pad", "crop", "crop_square"}, optional) – How to handle borders after shifting: - “pad”: keep all frames with NaN padding (default) - “crop”: crop to intersection of all frames - “crop_square”: crop to largest centered square
debug (bool, optional) – If True, returns additional diagnostic outputs.
max_shift (int, optional) – Maximum allowed shift in pixels. Detected shifts are clipped to this range.
pre_filter_sigma (float, optional) – Standard deviation of Gaussian filter applied to frames before cross-correlation.
max_jump (int, optional) – Maximum allowed change in shift between consecutive frames. If exceeded, the shift is replaced by a linear extrapolation from the previous two frames.
- Returns:
aligned_stack (np.ndarray[float]) – Aligned 3D stack of frames. Shape may be larger than input to accommodate all shifts.
metadata (dict) – Dictionary containing alignment information: - “reference_frame”: int, index of the reference frame - “method”: str, the alignment method used - “mode”: str, border approach used - “shifts”: np.ndarray of shape (n_frames, 2), detected (dy, dx) shifts - “original_shape”: tuple of (height, width) - “aligned_shape”: tuple of (height, width) of the output canvas - “border_mask”: np.ndarray[bool], True where valid frame pixels exist - “pre_filter_sigma”: float or None - “max_shift”: int or None - “max_jump”: int or None
debug_outputs (dict, optional) – Returned only if debug=True. Contains: - “shifts”: copy of the shifts array.
- Raises:
ValueError – If stack.ndim is not 3.
ValueError – If method is not one of {“fft_cross_correlation”, “full_cross_correlation”}.
ValueError – If reference_frame is not in the range [0, n_frames-1].
Notes
Using fft_cross_correlation reduces memory usage compared to full cross-correlation because it leverages the FFT algorithm and avoids creating large full correlation matrices.
Padding with NaNs allows all frames to be placed without clipping, but may increase memory usage for large shifts.
The function does not interpolate subpixel shifts; all shifts are integer-valued.
Examples
>>> import numpy as np
>>> from playnano.processing.video_processing import align_frames
>>> stack = np.random.rand(10, 200, 200)  # 10 frames of 200x200 pixels
>>> aligned_stack, metadata = align_frames(stack, reference_frame=0)
>>> aligned_stack.shape
(10, 210, 210)  # padded to accommodate shifts
>>> metadata['shifts']
array([[ 0,  0],
       [ 1, -2],
       ...])
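The core of FFT-based alignment is finding the argmax of the inverse-transformed cross-power spectrum. A minimal sketch of integer-shift detection (not the library's implementation, which adds clipping, Gaussian pre-filtering, and jump smoothing):

```python
import numpy as np

def detect_shift_fft(reference, moving):
    """Integer (dy, dx) shift that aligns `moving` back onto `reference`."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = reference.shape
    # Wrap peaks past the midpoint around to negative shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moving = np.roll(ref, shift=(3, -5), axis=(0, 1))   # displace by (+3, -5)
dy, dx = detect_shift_fft(ref, moving)               # corrective shift (-3, +5)
```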
- playnano.processing.video_processing.crop_square(stack: ndarray, pad=0) tuple[ndarray, dict][source]¶
Crop aligned stack to the largest centered square region.
This is based on the finite-pixel intersection across frames, with optional outward padding (np.nan).
- Parameters:
stack (np.ndarray) – Aligned 3D stack of shape (n_frames, height, width), possibly containing NaN borders.
pad (int, optional) – Outward padding (in pixels) applied beyond the square crop, filled with np.nan. Default is 0.
- Returns:
cropped (ndarray) – Cropped (and possibly padded) square stack.
meta (dict) – Metadata including original shape, intersection shape, square size, bounds, padding details, and offset compatible with the original function (offset within the intersection crop).
- playnano.processing.video_processing.intersection_crop(stack: ndarray, pad=0) tuple[ndarray, dict][source]¶
Crop aligned stack to the largest common intersection region (finite across frames).
Option to add padding to expand the crop beyond the intersection, filling with NaN when beyond the data.
- Parameters:
stack (np.ndarray) – Aligned 3D stack of shape (n_frames, height, width), possibly containing NaN borders.
pad (int, optional) – Outward padding (in pixels) applied beyond the intersection crop, filled with np.nan. Default is 0.
- Returns:
cropped (ndarray) – Cropped (and possibly padded) stack.
meta (dict) – Metadata including original shape, intersection bounds, requested bounds, actual padding applied, and new shape.
- playnano.processing.video_processing.register_video_processing() dict[str, Callable][source]¶
Return a dictionary of registered video processing filters.
Keys are names of the operations, values are the functions themselves. These functions should take a 3D stack (n_frames, H, W) and return either an ndarray (filtered stack) or a tuple (stack, metadata).
- playnano.processing.video_processing.replace_nan(stack: ndarray, mode: Literal['zero', 'mean', 'median', 'global_mean', 'constant'] = 'zero', value: float | None = None) tuple[ndarray, dict][source]¶
Replace NaN values in a 2D frame or 3D AFM image stack using various strategies.
Primarily used in video pipelines after alignment, but also applicable to single frames.
- Parameters:
stack (np.ndarray) – Input 3D array of shape (n_frames, height, width) or 2D frame (height, width) that may contain NaN values.
mode ({"zero", "mean", "median", "global_mean", "constant"}, optional) – Replacement strategy. Default is “zero”. - “zero” : Replace NaNs with 0. - “mean” : Replace NaNs with the mean of each frame. - “median” : Replace NaNs with the median of each frame. - “global_mean” : Replace NaNs with the mean of the entire stack. - “constant” : Replace NaNs with a user-specified constant value.
value (float, optional) – Constant value to use when mode=”constant”. Must be provided in that case.
- Returns:
filled (np.ndarray) – Stack of the same shape as stack with NaNs replaced according to mode.
meta (dict) – Metadata about the NaN replacement operation (e.g., count, mode, constant used).
- Raises:
ValueError – If mode is unknown or if mode=”constant” and value is not provided.
Notes
Frame-wise operations like “mean” and “median” compute statistics per frame independently.
Preserves the dtype of the input stack.
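The replacement strategies reduce to masked assignments; a simplified sketch of the documented modes for a 3D stack (illustrative, not the library's code):

```python
import numpy as np

def replace_nan_sketch(stack, mode="zero", value=None):
    """Fill NaNs per the documented strategies; returns (filled, meta)."""
    filled = stack.copy()
    nan_mask = np.isnan(filled)
    if mode == "zero":
        filled[nan_mask] = 0.0
    elif mode == "global_mean":
        filled[nan_mask] = np.nanmean(stack)
    elif mode == "constant":
        if value is None:
            raise ValueError("mode='constant' requires a value")
        filled[nan_mask] = value
    elif mode in ("mean", "median"):
        stat = np.nanmean if mode == "mean" else np.nanmedian
        for i in range(filled.shape[0]):     # per-frame statistics
            filled[i][np.isnan(filled[i])] = stat(stack[i])
    else:
        raise ValueError(f"unknown mode: {mode}")
    meta = {"n_replaced": int(nan_mask.sum()), "mode": mode}
    return filled, meta

stack = np.array([[[1.0, np.nan], [3.0, 4.0]]])   # one 2x2 frame
filled, meta = replace_nan_sketch(stack, mode="mean")
```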
- playnano.processing.video_processing.rolling_frame_align(stack: ndarray, window: int = 5, mode: str = 'pad', debug: bool = False, max_shift: int | None = None, pre_filter_sigma: float | None = None, max_jump: int | None = None)[source]¶
Align a stack of AFM frames using a rolling reference and integer pixel shifts.
This function computes frame-to-frame shifts relative to a rolling reference (average of the last window aligned frames) using phase cross-correlation. Each frame is then placed on a canvas large enough to accommodate all shifts. Optional jump smoothing prevents sudden unrealistic displacements between consecutive frames, and optional Gaussian pre-filtering can improve correlation robustness for noisy data.
- Parameters:
stack (np.ndarray[float]) – 3D array of shape (n_frames, height, width) containing the image frames.
window (int, optional) – Number of previous aligned frames to average when building the rolling reference. Default is 5.
mode ({"pad", "crop", "crop_square"}, optional) – How to handle borders after shifting: - “pad”: keep all frames with NaN padding (default) - “crop”: crop to intersection of all frames - “crop_square”: crop to largest centered square
debug (bool, optional) – If True, returns additional diagnostic outputs such as the rolling reference frames. Default is False.
max_shift (int, optional) – Maximum allowed shift in pixels along either axis. Detected shifts are clipped. Default is None (no clipping).
pre_filter_sigma (float, optional) – Standard deviation of Gaussian filter applied to both reference and moving frames prior to cross-correlation. Helps reduce noise. Default is None.
max_jump (int, optional) – Maximum allowed jump in pixels between consecutive frame shifts. If exceeded, the shift is replaced by a linear extrapolation from the previous two shifts. Default is None (no jump smoothing).
- Returns:
aligned_stack (np.ndarray[float]) – 3D array of shape (n_frames, canvas_height, canvas_width) containing the aligned frames. NaN values indicate areas outside the original frames after alignment.
metadata (dict) – Dictionary containing alignment information: - “window”: int, rolling reference window used - “method”: str, alignment method used - “mode”: str, border approach used - “shifts”: ndarray of shape (n_frames, 2), detected integer shifts (dy, dx) - “original_shape”: tuple of (height, width) - “aligned_shape”: tuple of (canvas_height, canvas_width) - “border_mask”: ndarray of shape (canvas_height, canvas_width), True where valid pixels exist - “pre_filter_sigma”: float or None - “max_shift”: int or None - “max_jump”: int or None
debug_outputs (dict, optional) – Returned only if debug=True. Contains: - “shifts”: copy of the detected shifts array - “aligned_refs”: deque of indices used for rolling reference
- Raises:
ValueError – If stack.ndim is not 3.
ValueError – If window < 1.
Notes
The rolling reference is computed using the last window aligned frames, ignoring NaN pixels.
Shifts are integer-valued; no subpixel interpolation is performed.
Padding ensures all frames fit without clipping, but increases memory usage.
Internally, a deque aligned_refs tracks which patches of which frames contribute to the rolling reference. Each entry stores (frame_index, y0c, y1c, x0c, x1c, fy0, fy1, fx0, fx1), i.e. both the region of the canvas updated and the corresponding slice in the original frame. This allows exact removal of old contributions from rolling_sum and rolling_count when the window is exceeded, ensuring consistency without recomputation.
Examples
>>> import numpy as np
>>> from playnano.processing.video_processing import rolling_frame_align
>>> stack = np.random.rand(10, 200, 200)  # 10 frames of 200x200 pixels
>>> aligned_stack, metadata = rolling_frame_align(stack, window=3)
>>> aligned_stack.shape
(10, 210, 210)
>>> metadata['shifts']
array([[ 0,  0],
       [ 1, -1],
       ...])
- playnano.processing.video_processing.temporal_mean_filter(stack: ndarray, window: int = 3) ndarray[source]¶
Apply mean filter across the time dimension.
- Parameters:
stack (ndarray of shape (n_frames, height, width)) – Input stack.
window (int, optional) – Window size (number of frames). Default is 3.
- Returns:
filtered – Stack after temporal mean filtering.
- Return type:
ndarray of shape (n_frames, height, width)
- playnano.processing.video_processing.temporal_median_filter(stack: ndarray, window: int = 3) ndarray[source]¶
Apply median filter across the time dimension.
- Parameters:
stack (ndarray of shape (n_frames, height, width)) – Input stack.
window (int, optional) – Window size (number of frames). Default is 3.
- Returns:
filtered – Stack after temporal median filtering.
- Return type:
ndarray of shape (n_frames, height, width)
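A temporal median over a centered window suppresses transient artifacts that appear in only one frame. A sketch of the operation with truncated windows at the stack edges (illustrative; how the library handles edges is not stated above):

```python
import numpy as np

def temporal_median_sketch(stack, window=3):
    """Median over a centered temporal window; edges use a truncated window."""
    n = stack.shape[0]
    half = window // 2
    out = np.empty_like(stack)
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        out[t] = np.median(stack[lo:hi], axis=0)   # median along time axis
    return out

# A single-frame spike is removed by the temporal median
stack = np.zeros((5, 2, 2))
stack[2] = 10.0
smoothed = temporal_median_sketch(stack, window=3)
```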
Module contents¶
Public package initialization.