acquisition.Probe noise model #39
@ogurreck I'm more of a computer vision and computational geometry person, so thanks for your patience while I try to understand the details of your improved noise model.
Your proposed noise model makes sense to me as far as providing the correct noise distribution by converting the attenuation data to detector counts before adding the noise.
I agree. Intersection calculations will account for the majority of the computation time.
In your improved model, would newdata need to be scaled to a particular range?
I was suggesting newdata to be in the range [0, 3] because this range is statistically sound: it corresponds to transmissions exp(-newdata) between 100% and about 5%. In the xdesign nomenclature, newdata is the attenuation line integral.
The standard deviation of the detector count would be sqrt(lambda), the square root of the mean count.
It seems like your noise model depends on having all of the data available (otherwise you can't know the maximum value in order to choose an appropriate scaling factor). Thus it would probably be better to add noise as a post-processing method instead of in Probe.measure. Otherwise, can you think of a way such that the noise can be computed without knowing in advance what the maximum value of the sinogram (or other data scheme) will be?
Ideally one can use Poisson noise and add it on-the-fly, but then another scaling factor should be provided for the source (e.g. counts, detector, readout, etc.). Maybe the best approach is to add flux as a property of Probe.
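A hypothetical sketch of that idea (the flux property and the add_noise method are illustrative only, not part of the current xdesign API):

```python
import numpy as np

class Probe:
    """Sketch only: a probe that carries its own photon flux."""

    def __init__(self, flux=1e5):
        self._flux = flux  # assumed mean photon count per measurement

    @property
    def flux(self):
        """Mean photon count per exposure; sets the Poisson noise level."""
        return self._flux

    def add_noise(self, newdata):
        """Simulate counting statistics for measured line integrals."""
        expected = self.flux * np.exp(-np.asarray(newdata, dtype=float))
        observed = np.random.poisson(expected)
        return -np.log(np.maximum(observed, 1) / self.flux)
```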
True on both accounts. Therefore, I suggested using real world units. That would allow a proper estimate of how noisy reconstructed data would be and what type of features are still discernible.
As we are dealing with large numbers of counts on the detector (N >> 1), you could approximate the Poisson noise with a normal distribution. That should give about the same effect.
As written above, I don't think one can reasonably add Poisson noise on the fly without knowing all the parameters (proper attenuation coefficients) because these scale the distribution function. For on-the-fly noise generation, I would suggest using a normal distribution, which behaves quite similarly for large count numbers.
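To illustrate that approximation with plain numpy (not from the thread): for large lambda, a normal distribution with mean lambda and standard deviation sqrt(lambda) is nearly indistinguishable from the Poisson distribution:

```python
import numpy as np

lam = 10_000  # mean detector count, N >> 1
poisson = np.random.poisson(lam, size=500_000)
gauss = np.random.normal(loc=lam, scale=np.sqrt(lam), size=500_000)

print(poisson.mean(), poisson.std())  # ~10000, ~100
print(gauss.mean(), gauss.std())      # ~10000, ~100
```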
I propose the following patch for now:

```python
def measure(self, phantom, sigma=0):
    """Measure the phantom with optional Gaussian noise.

    sigma is the standard deviation of the normally distributed noise.
    """
    [...]
    if sigma > 0:
        # multiplicative noise: perturb the measurement by a Gaussian factor
        newdata += newdata * np.random.normal(scale=sigma)
    self.record()
    return newdata
```

And sometime in the future, a more accurate noise model can follow. I think we want to refrain from modeling anything detector dependent, such as count scaling, in the Probe class.
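For reference, a hypothetical call with this patch (probe and phantom are assumed to already exist):

```python
# hypothetical usage of the patched method; `probe` is an acquisition.Probe
measurement = probe.measure(phantom, sigma=0.05)  # 5% relative Gaussian noise
```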
Your proposed patch should fix the issue quite well.
If you would like any help or input on that, please get back to me. I'd be happy to help.
@ogurreck proposed a more accurate noise model for data acquisition in #39. This is a temporary patch which replaces the current inaccurate model by adding normally distributed noise to the measurement. A more accurate noise model is proposed for a later time.
The noise parameter in the sinograms only increases the absorption, never decreases it; see the attached image.
This introduces a statistical bias. I think the reason for this behaviour is that noise is added to the attenuation data rather than to the measured data.
The way a "normal" tomography measurement works is by dividing (dark-current normalized) camera images: projection divided by flat-field. These images both have Poisson-distributed counting errors.
The variance of the Poisson distribution equals lambda:

VAR = sigma^2 = lambda

For a noise level equivalent to one relative standard deviation (i.e. sigma / counts = noise), this corresponds to a count rate of counts = noise^(-2). I would suggest the noise application to be done in the following manner:
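A minimal sketch of such a scheme (illustrative, not the exact code from the original post), assuming newdata holds the noiseless line integrals and counts = noise^(-2) is the mean flat-field count:

```python
import numpy as np

def apply_counting_noise(newdata, noise):
    """Sketch: simulate projection/flat-field counting statistics.

    newdata -- noiseless attenuation line integrals, ideally in [0, 3]
    noise   -- relative standard deviation of the flat-field counts
    """
    counts = noise ** -2  # mean flat-field count rate
    flat = np.random.poisson(counts, size=np.shape(newdata))
    proj = np.random.poisson(counts * np.exp(-np.asarray(newdata, dtype=float)))
    return -np.log(np.maximum(proj, 1) / flat)  # noisy line integrals
```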
This approach creates a nice and uniform histogram of the noise around the expected value.
However, this approach would require that newdata is scaled reasonably, i.e. ideally newdata would be in [0, 3], and it would support using "real world units".
It is obviously also more computationally intensive than the old approach, but considering the time required for computing all phantom intersections, I expect the overhead to be negligible.
This is an example of what the histograms look like (500,000 random number pairs) with a transmission value of 0.25, i.e. exp(-newdata) = 0.25:
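A short sketch (assumptions as above: counts = noise^(-2)) that reproduces this experiment with a hypothetical noise level of 0.05:

```python
import numpy as np

noise = 0.05                  # relative noise level
counts = noise ** -2          # mean flat-field count: 400
n = 500_000                   # number of random number pairs

flat = np.random.poisson(counts, size=n)         # flat-field counts
proj = np.random.poisson(counts * 0.25, size=n)  # transmission = 0.25

noisy = -np.log(np.maximum(proj, 1) / flat)
hist, edges = np.histogram(noisy, bins=100)
# the histogram peaks near -log(0.25) ~= 1.386
```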