Observer module#
Fov Angles#
- class cherab.iter.observer.fov_angles.FoV(view, phi_step=0.025, theta_step=0.025)#
Diagnostic field of view. Selects rays for a given line of sight (LoS) and returns the rays' indices.
- Parameters:
- Variables:
- calc_coordination_system_view()#
Calculate the coordinate system of the view using the coordinates of the ray corners.
- euler()#
Return (yaw, pitch, roll) for the (-Y)(-X)'Z'' intrinsic rotation sequence.
- create_rays(phi_step=0.025, theta_step=0.025)#
Create the rays covering the field of view with the given phi and theta angular steps.
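The ray-grid idea behind these methods can be sketched as building unit direction vectors on a regular (phi, theta) grid (a minimal standalone illustration, not the FoV implementation; the function name and angle ranges are assumptions, while the step defaults mirror phi_step and theta_step above):

```python
import math

def ray_directions(phi_range, theta_range, phi_step=0.025, theta_step=0.025):
    """Build unit direction vectors on a (phi, theta) grid.

    Illustrative only: the real FoV class selects rays for a line of
    sight and returns their indices rather than building directions.
    """
    n_phi = int(round((phi_range[1] - phi_range[0]) / phi_step)) + 1
    n_theta = int(round((theta_range[1] - theta_range[0]) / theta_step)) + 1
    directions = []
    for i in range(n_phi):
        for j in range(n_theta):
            p = math.radians(phi_range[0] + i * phi_step)
            t = math.radians(theta_range[0] + j * theta_step)
            # Spherical -> Cartesian unit vector (theta measured from +z).
            directions.append((
                math.sin(t) * math.cos(p),
                math.sin(t) * math.sin(p),
                math.cos(t),
            ))
    return directions

dirs = ray_directions((0.0, 1.0), (89.0, 90.0))  # 41 x 41 grid of unit vectors
```

Using integer step counts rather than accumulating floats keeps the grid size deterministic.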
Fov Camera#
- class cherab.iter.observer.fov_camera.FovCamera(pixels, fov=(90.0, 90.0), sensitivity=1.0, frame_sampler=None, pipelines=None, **kwargs)#
Bases: Observer2D
Angular field-of-view observer.
A camera that launches rays from the observer's origin point over a specified field of view in spherical coordinates. Unlike a pinhole camera, each pixel of the final image represents a solid angle of collection, not a rectangle on the image plane.
- Parameters:
pixels (tuple[int, int]) – A tuple of pixel dimensions for the camera, e.g. (512, 512).
fov (tuple[float, float]) – The field of view of the camera in degrees in the horizontal and vertical directions, by default (90.0, 90.0).
sensitivity (float) – The sensitivity of each pixel, by default 1.0.
frame_sampler (FrameSampler2D) – The frame sampling strategy, by default RGBAdaptiveSampler2D (i.e. extra samples for noisier pixels).
pipelines (list[Pipeline2D]) – The list of pipelines that will process the spectrum measured at each pixel by the camera, by default RGBPipeline2D.
**kwargs (Observer2D and _ObserverBase properties, optional) – Used to specify properties like a parent, transform, pipelines, etc.
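The solid-angle behaviour described above can be sketched with a pixel-to-direction mapping like the following (a standalone illustration, not the FovCamera implementation; the function name and the exact rotation convention are assumptions):

```python
import math

def pixel_direction(ix, iy, pixels=(512, 512), fov=(90.0, 90.0)):
    """Map a pixel index to a unit viewing direction.

    Each pixel subtends an equal angular step of the field of view,
    so it represents a solid angle of collection rather than a
    rectangle on an image plane.
    """
    nx, ny = pixels
    # Fractional offset of the pixel centre from the view axis.
    h = (ix + 0.5) / nx - 0.5
    v = (iy + 0.5) / ny - 0.5
    phi = math.radians(h * fov[0])    # horizontal angle from the axis
    theta = math.radians(v * fov[1])  # vertical angle from the axis
    # Rotate the +z view axis horizontally by phi, vertically by theta.
    return (
        math.sin(phi) * math.cos(theta),
        math.sin(theta),
        math.cos(phi) * math.cos(theta),
    )
```

Pixels near the centre of the frame map to directions near the +z view axis, and the angular spacing between neighbouring pixels is constant across the frame.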
RTM Pipelines#
- class cherab.iter.observer.rtm_pipelines.RtmPipeline0D(name=None)#
Bases: Pipeline0D
- initialise(min_wavelength, max_wavelength, spectral_bins, spectral_slices, quiet)#
Initialises containers for the pipeline’s processing output.
The deriving class should use this method to perform any initialisation needed for their calculations.
This is a virtual method and must be implemented in a sub class.
- Parameters:
min_wavelength (float) – The minimum wavelength in the spectral range.
max_wavelength (float) – The maximum wavelength in the spectral range.
spectral_bins (int) – Number of spectral samples across wavelength range.
spectral_slices (list) – List of spectral sub-ranges for cases where the number of spectral rays is > 1 (i.e. dispersion effects turned on).
quiet (bool) – When True, suppresses output to the terminal.
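The spectral containers initialised here are sized from these arguments; for instance, the bin centres across the wavelength range follow directly from min_wavelength, max_wavelength and spectral_bins (a sketch of the relationship, not the pipeline's actual code):

```python
def wavelength_bin_centres(min_wavelength, max_wavelength, spectral_bins):
    """Centre wavelength of each spectral bin across the range."""
    delta = (max_wavelength - min_wavelength) / spectral_bins
    return [min_wavelength + (i + 0.5) * delta for i in range(spectral_bins)]

centres = wavelength_bin_centres(400.0, 700.0, 3)  # → [450.0, 550.0, 650.0]
```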
- pixel_processor(slice_id)#
Initialise and return a pixel processor for this pipeline.
Called by worker threads: each worker requests a pixel processor from each pipeline to process the output of its work.
This is a virtual method and must be implemented in a sub class.
- Parameters:
slice_id (int) – The integer identifying the spectral slice being worked on by the requesting worker thread.
- Return type:
PixelProcessor
- update(slice_id, packed_result, pixel_samples)#
Updates the internal results array with packed results from the pixel processor.
After worker threads have observed the world and used the pixel processor to process the spectra into packed results, the worker then passes the packed results to the pipeline with the update() method.
If this pipeline implements some form of visualisation, update the visualisation at the end of this method.
This is a virtual method and must be implemented in a sub class.
- Parameters:
slice_id (int) – The integer identifying the spectral slice being worked on by the worker thread.
packed_result (tuple) – The tuple of results generated by this pipeline’s PixelProcessor.
pixel_samples (int) – The number of samples taken by the worker. Needed for ensuring accuracy of statistical errors when combining new samples with previous results.
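Combining new samples with previous results while keeping the statistics consistent typically amounts to a sample-count-weighted update; a minimal sketch for the mean (illustrative only — the actual packed results of this pipeline may differ):

```python
def combine_means(mean_old, n_old, mean_new, n_new):
    """Merge two sample means, weighting each by its sample count."""
    n_total = n_old + n_new
    if n_total == 0:
        return 0.0, 0
    mean = (mean_old * n_old + mean_new * n_new) / n_total
    return mean, n_total

mean, n = combine_means(2.0, 100, 4.0, 100)  # → (3.0, 200)
```

Without the sample counts, results from workers that took different numbers of samples would be weighted incorrectly.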
- finalise()#
Finalises the results when rendering has finished.
This method is called when all workers have finished sampling and the results need to undergo any final processing.
If this pipeline implements some form of visualisation, use this method to plot the final visualisation of results.
This is a virtual method and must be implemented in a sub class.
- class cherab.iter.observer.rtm_pipelines.RtmPipeline1D(name=None)#
Bases: Pipeline1D
- initialise(pixels, pixel_samples, min_wavelength, max_wavelength, spectral_bins, spectral_slices, quiet)#
Initialises containers for the pipeline’s processing output.
The deriving class should use this method to perform any initialisation needed for their calculations.
This is a virtual method and must be implemented in a sub class.
- Parameters:
pixels (tuple) – A tuple defining the pixel dimensions being sampled, e.g. (256,).
pixel_samples (int) – The number of samples being taken per pixel. Needed for statistical calculations.
min_wavelength (float) – The minimum wavelength in the spectral range.
max_wavelength (float) – The maximum wavelength in the spectral range.
spectral_bins (int) – Number of spectral samples across wavelength range.
spectral_slices (list) – List of spectral sub-ranges for cases where the number of spectral rays is > 1 (i.e. dispersion effects turned on).
quiet (bool) – When True, suppresses output to the terminal.
- pixel_processor(pixel, slice_id)#
Initialise and return a pixel processor for this pipeline and pixel coordinate.
Called by worker threads: each worker requests a pixel processor for the pixel it is processing.
This is a virtual method and must be implemented in a sub class.
- update(pixel, slice_id, packed_result)#
Updates the internal results array with packed results from the pixel processor.
After worker threads have observed the world and used the pixel processor to process the spectra into packed results, the worker then passes the packed results for the current pixel to the pipeline with the update() method. This method should add the results for this pixel to the pipeline’s results array.
If this pipeline implements some form of visualisation, update the visualisation at the end of this method.
This is a virtual method and must be implemented in a sub class.
- finalise()#
Finalises the results when rendering has finished.
This method is called when all workers have finished sampling and the results need to undergo any final processing.
If this pipeline implements some form of visualisation, use this method to plot the final visualisation of results.
This is a virtual method and must be implemented in a sub class.
- class cherab.iter.observer.rtm_pipelines.RtmPipeline2D(name=None)#
Bases: Pipeline2D
- initialise(pixels, pixel_samples, min_wavelength, max_wavelength, spectral_bins, spectral_slices, quiet)#
Initialises containers for the pipeline’s processing output.
The deriving class should use this method to perform any initialisation needed for their calculations.
This is a virtual method and must be implemented in a sub class.
- Parameters:
pixels (tuple) – A tuple defining the pixel dimensions being sampled, e.g. (512, 512).
pixel_samples (int) – The number of samples being taken per pixel. Needed for statistical calculations.
min_wavelength (float) – The minimum wavelength in the spectral range.
max_wavelength (float) – The maximum wavelength in the spectral range.
spectral_bins (int) – Number of spectral samples across wavelength range.
spectral_slices (list) – List of spectral sub-ranges for cases where the number of spectral rays is > 1 (i.e. dispersion effects turned on).
quiet (bool) – When True, suppresses output to the terminal.
- pixel_processor(x, y, slice_id)#
Initialise and return a pixel processor for this pipeline and pixel coordinate.
Called by worker threads: each worker requests a pixel processor for the pixel it is currently processing.
This is a virtual method and must be implemented in a sub class.
- Parameters:
x (int) – The x pixel coordinate of the pixel being processed by the worker.
y (int) – The y pixel coordinate of the pixel being processed by the worker.
slice_id (int) – The integer identifying the spectral slice being worked on by the requesting worker thread.
- Return type:
PixelProcessor
- update(x, y, slice_id, packed_result)#
Updates the internal results array with packed results from the pixel processor.
After worker threads have observed the world and used the pixel processor to process the spectra into packed results, the worker then passes the packed results for the current pixel to the pipeline with the update() method. This method should add the results for this pixel to the pipeline’s results array.
If this pipeline implements some form of visualisation, update the visualisation at the end of this method.
This is a virtual method and must be implemented in a sub class.
- Parameters:
x (int) – The x pixel coordinate of the pixel being sampled by the worker.
y (int) – The y pixel coordinate of the pixel being sampled by the worker.
slice_id (int) – The integer identifying the spectral slice being worked on by the worker thread.
packed_result (tuple) – The tuple of results generated by this pipeline’s PixelProcessor.
- finalise()#
Finalises the results when rendering has finished.
This method is called when all workers have finished sampling and the results need to undergo any final processing.
If this pipeline implements some form of visualisation, use this method to plot the final visualisation of results.
This is a virtual method and must be implemented in a sub class.
- class cherab.iter.observer.rtm_pipelines.RtmPixelProcessor(bins)#
Bases: PixelProcessor
A PixelProcessor that stores the ray transfer matrix for each pixel.
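Such a processor can be sketched as an accumulator over a fixed number of spectral bins (a simplified stand-in that only loosely mirrors the Raysect PixelProcessor interface; the class and its behaviour are illustrative, not the RtmPixelProcessor implementation):

```python
class SketchRtmPixelProcessor:
    """Accumulate sampled spectra into `bins` matrix entries per pixel.

    Illustrative only: loosely follows the add_sample/pack_results
    shape of a Raysect PixelProcessor without depending on Raysect.
    """

    def __init__(self, bins):
        self.bins = bins
        self.matrix = [0.0] * bins
        self.samples = 0

    def add_sample(self, spectrum):
        # `spectrum` is a sequence of `bins` samples for one ray.
        for i, value in enumerate(spectrum):
            self.matrix[i] += value
        self.samples += 1

    def pack_results(self):
        # Mean contribution per bin over all samples taken.
        if self.samples == 0:
            return tuple(self.matrix)
        return tuple(v / self.samples for v in self.matrix)
```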
Sampler#
- class cherab.iter.observer.sampler.SequentialFullFrameSampler1D#
Bases: FrameSampler1D
- generate_tasks(pixels)#
Generates a list of tuples that selects the pixels to render.
Must return a list of tuples where each tuple contains the id of a pixel to render. For example:
tasks = [(1,), (5,), (512,), …]
This is a virtual method and must be implemented in a sub class.
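A full-frame 1D task list of this shape can be generated with a one-liner (a sketch of the behaviour, not the class's actual code):

```python
def generate_full_frame_tasks_1d(pixels):
    """One task tuple per pixel id, covering the whole 1D frame."""
    return [(i,) for i in range(pixels[0])]

tasks = generate_full_frame_tasks_1d((4,))  # → [(0,), (1,), (2,), (3,)]
```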
- class cherab.iter.observer.sampler.SequentialMaskedSampler1D(mask)#
Bases: FrameSampler1D
- generate_tasks(pixels)#
Generates a list of tuples that selects the pixels to render.
Must return a list of tuples where each tuple contains the id of a pixel to render. For example:
tasks = [(1,), (5,), (512,), …]
This is a virtual method and must be implemented in a sub class.
- class cherab.iter.observer.sampler.SequentialFullFrameSampler2D#
Bases: FrameSampler2D
- generate_tasks(pixels)#
Generates a list of tuples that selects the pixels to render.
Must return a list of tuples where each tuple contains the id of a pixel to render. For example:
tasks = [(1, 10), (5, 53), (512, 354), …]
This is a virtual method and must be implemented in a sub class.
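In 2D the same idea enumerates every (x, y) pixel coordinate (a sketch of the behaviour, not the class's actual code):

```python
def generate_full_frame_tasks_2d(pixels):
    """One task tuple per (x, y) pixel coordinate across the 2D frame."""
    nx, ny = pixels
    return [(x, y) for x in range(nx) for y in range(ny)]

tasks = generate_full_frame_tasks_2d((2, 2))  # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```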
- class cherab.iter.observer.sampler.SequentialMaskedSampler2D(mask)#
Bases: FrameSampler2D
- generate_tasks(pixels)#
Generates a list of tuples that selects the pixels to render.
Must return a list of tuples where each tuple contains the id of a pixel to render. For example:
tasks = [(1, 10), (5, 53), (512, 354), …]
This is a virtual method and must be implemented in a sub class.
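A masked 2D sampler restricts the task list to the pixels where a boolean mask is set; a sketch of that selection (the mask indexing convention, mask[x][y], is an assumption):

```python
def generate_masked_tasks_2d(mask):
    """Task tuples for the (x, y) coordinates where `mask` is truthy.

    `mask` is a 2D nested sequence indexed as mask[x][y].
    """
    return [
        (x, y)
        for x, row in enumerate(mask)
        for y, selected in enumerate(row)
        if selected
    ]

mask = [[True, False], [False, True]]
tasks = generate_masked_tasks_2d(mask)  # → [(0, 0), (1, 1)]
```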
- class cherab.iter.observer.sampler.SpectralAdaptiveSampler1D(pipeline, fraction=0.2, ratio=10.0, min_samples=1000, cutoff=0.0)#
Bases: FrameSampler1D
An adaptive sampler for use with spectral pipelines, based on MonoAdaptiveSampler1D from Raysect.
- generate_tasks(pixels)#
Generates a list of tuples that selects the pixels to render.
Must return a list of tuples where each tuple contains the id of a pixel to render. For example:
tasks = [(1,), (5,), (512,), …]
This is a virtual method and must be implemented in a sub class.
- class cherab.iter.observer.sampler.SpectralAdaptiveSampler2D(pipeline, fraction=0.2, ratio=10.0, min_samples=1000, cutoff=0.0)#
Bases: FrameSampler2D
An adaptive sampler for use with spectral pipelines, based on MonoAdaptiveSampler2D from Raysect.
- generate_tasks(pixels)#
Generates a list of tuples that selects the pixels to render.
Must return a list of tuples where each tuple contains the id of a pixel to render. For example:
tasks = [(1, 10), (5, 53), (512, 354), …]
This is a virtual method and must be implemented in a sub class.
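The adaptive selection step behind samplers of this kind can be sketched as ranking pixels by a noise estimate and resampling the noisiest fraction (an illustrative sketch: the parameter names mirror the constructor above, but the noise metric and the exact selection rule are assumptions):

```python
def select_adaptive_tasks(errors, fraction=0.2, min_samples=1000, samples=None, cutoff=0.0):
    """Pick the noisiest `fraction` of pixels for further sampling.

    `errors` maps pixel id -> noise estimate (e.g. normalised std. dev.);
    `samples` maps pixel id -> number of samples already taken.
    Under-sampled pixels are always included; pixels at or below
    `cutoff` noise are treated as converged and skipped.
    """
    samples = samples or {}
    # Always resample pixels that have not reached min_samples yet.
    forced = [p for p in errors if samples.get(p, 0) < min_samples]
    # Rank the remainder by noise, keeping only those above the cutoff.
    rest = sorted(
        (p for p in errors if p not in forced and errors[p] > cutoff),
        key=lambda p: errors[p],
        reverse=True,
    )
    # Keep the worst `fraction` of the frame (at least one pixel).
    n_keep = max(1, int(fraction * len(errors)))
    return [(p,) for p in forced + rest[:n_keep]]
```

Concentrating extra samples on high-noise pixels is what lets adaptive samplers converge a frame faster than uniform resampling.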