A plant manufactures weights with a nominal mass of 1 kilogram.
There is inherent variability in the weights' mass on the production line, which can be characterized by the process mean, assumed to be 1 kilogram, and a standard deviation sd1. The manufacturing variability is assumed to be normally distributed.
A product tolerance interval t1 is set so that weights measured outside the interval are rejected as out of specification; the fraction of weights that pass defines the manufacturing yield.
The remaining "good" weights are then each paired with a digital scale as reference weights and used to verify the scale's calibration throughout its lifetime.
The scale reading is also subject to variability; its noise is assumed to be normally distributed and is characterized by a standard deviation sd2.
There is built-in functionality in the scale whereby a tolerance interval t2 (the system tolerance) is pre-set, so that a reading of the reference weight falling outside this tolerance generates a "scale uncalibrated" error.
Upon pairing a scale with a reference weight, a single reading r is taken on each scale. An error rate e is defined as the statistical likelihood of this first reading falling outside the system tolerance interval t2 (i.e. number of failures / number of reads).
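To make the setup concrete, here is a minimal Monte Carlo sketch of the process as I understand it; the numeric values of sd1, sd2, t1, and t2 are hypothetical placeholders, and the function name simulate is my own:

```python
import random

# Hypothetical parameter values, just for illustration (all in kg).
SD1 = 0.010     # manufacturing standard deviation
SD2 = 0.005     # scale reading noise standard deviation
T1 = 2 * SD1    # product tolerance (half-width around 1 kg)
T2 = 3 * SD2    # system tolerance (half-width around 1 kg)

def simulate(n=200_000, seed=42):
    """Simulate manufacturing + first scale reading; return (yield, e)."""
    rng = random.Random(seed)
    accepted = 0
    failures = 0
    for _ in range(n):
        mass = rng.gauss(1.0, SD1)            # manufactured weight
        if abs(mass - 1.0) > T1:              # out of spec -> rejected
            continue
        accepted += 1
        reading = mass + rng.gauss(0.0, SD2)  # first reading, with noise
        if abs(reading - 1.0) > T2:           # outside system tolerance
            failures += 1                     # "scale uncalibrated" error
    return accepted / n, failures / accepted

y, e = simulate()
print(f"yield ~ {y:.3f}, error rate e ~ {e:.4f}")
```

Note that the error rate is computed only over the accepted (in-spec) weights, since rejected weights never reach a scale.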
Using the variables sd1 and sd2, is it possible to derive a statistical relationship between the manufacturing yield as defined by t1, the system tolerance t2, and the error rate e?
For example, if we set t2 = +/- 3*sd2 and t1 = +/- 2*sd1, can the statistical error rate e be evaluated? Similarly, can the manufacturing tolerance interval t1 and the error rate e be pre-set to derive t2, or is the system underdetermined with the available variables?
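My intuition is that under the stated normality assumptions the accepted weights follow a truncated normal distribution on [1 - t1, 1 + t1], so e can be expressed as an integral over that distribution and evaluated numerically. The sketch below (hypothetical sd1 and sd2 values, function names my own) illustrates this for the t1 = 2*sd1, t2 = 3*sd2 example; this is what I mean by "evaluated":

```python
import math

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def error_rate(sd1, sd2, t1, t2, steps=20_000):
    """P(|reading - nominal| > t2), with the weight's mass deviation m
    drawn from a normal(0, sd1) truncated to the accepted band [-t1, t1]
    and the reading noise drawn from normal(0, sd2)."""
    yield_frac = 2.0 * Phi(t1 / sd1) - 1.0   # P(weight accepted)
    # Midpoint-rule integration over the accepted mass deviation m:
    # P(accepted and reading within t2) = integral of density * pass prob.
    total = 0.0
    dm = 2.0 * t1 / steps
    for i in range(steps):
        m = -t1 + (i + 0.5) * dm
        pass_prob = Phi((t2 - m) / sd2) - Phi((-t2 - m) / sd2)
        total += (phi(m / sd1) / sd1) * pass_prob * dm
    return 1.0 - total / yield_frac          # condition on acceptance

# Hypothetical values: sd1 = 0.010 kg, sd2 = 0.005 kg,
# so t1 = 2*sd1 = 0.020 and t2 = 3*sd2 = 0.015.
print(error_rate(sd1=0.010, sd2=0.005, t1=0.020, t2=0.015))
```

If this framing is right, then fixing t1 (in units of sd1) and e pins down t2 (in units of sd2) through the same integral, which is what the second half of the question is asking.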