Talk:Auto-zero/Auto-calibration
From Wikimization
(New section: Trial: Poysnomial expressions) |
|||
Line 43: | Line 43: | ||
Unfortunately applying the constraint algebraically leads to some negative terms. Perhaps collecting the positive and negative terms into separate conditions and placing the sum constraint would avoid this? | Unfortunately applying the constraint algebraically leads to some negative terms. Perhaps collecting the positive and negative terms into separate conditions and placing the sum constraint would avoid this? | ||
+ | |||
+ | == Abstract form of Likelihood case == | ||
+ | Setup: | ||
+ | |||
+ | <math>P\,</math> a n-dimensional collection of Gaussian probability distribution | ||
+ | functions. This is a condition I want to liberalize to any convex | ||
+ | PDF (and in some sense to a uniform PDF). | ||
+ | |||
+ | <math>p\in P\,</math> ; <math>\bar{p}\,</math> PDF of <math>p\,</math> | ||
+ | |||
+ | General formula: <math>y=f(x;p)\,</math> | ||
+ | |||
+ | Calibration pair <math>[y_{c},x_{c}]\,</math> constraining <math>p\,</math> : <math>y_{c}=f(x_{c};p)\,</math> | ||
+ | and consequently forming a new PDF <math>\bar{p'}\,</math> with <math>p'\,</math> the constrained <math>p\,</math>. | ||
+ | |||
+ | <math>y_{t}=f(x_{t};p')\,</math> | ||
+ | |||
+ | '''Clearly given <math>y_{t}\,</math> fixed, <math>x_{t}\,</math> has a PDF. <math>\bar{x_{t}}=\bar{x}_{t}\left(y_{t},y_{c},x_{c},\bar{p}\right)=\bar{x}_{t}\left(y_{t},\bar{p'}\right)\,</math>''' | ||
+ | |||
+ | Problem 1: mode | ||
+ | |||
+ | maximize: The most likely value of <math>\bar{x}_{t}\,</math> | ||
+ | |||
+ | (i.e. <math>\frac{dx_{t}}{dp'}=0\,</math> for differentiable functions) | ||
+ | |||
+ | wrt : <math>p'\,</math> | ||
+ | |||
+ | given <math>y_{t},y_{c},x_{c},\bar{x_{t}},y=f(x;p)\,</math> | ||
+ | or alternately <math>y_t,\bar{x_{t}},y_t=f(x_t;p')\,</math> |
Revision as of 08:59, 12 September 2010
I have been working on creating a robust design structure for the design of Auto-Zero/Auto-calibration implementations. I have a lot of moving parts in my head, but I believe I need outside viewpoints and knowledge in order to construct a general approach. If anybody is interested, please respond here. It is a bit more complicated than it would seem on the surface, IMHO. I somewhat think it falls within convex optimization; on the other hand, I sometimes think it doesn't. I do have a particular example that illustrates the various problems that can arise. Although the ideas should be applicable to scientific measurements, the applications I have in mind relate to autonomous embedded software and hardware implementations.
Ray
Note on the examples: I think that due to the physically meaningful restrictions on the problem (R>0 and errors less than 100%), a conversion process using logs and affine transforms will generate posynomial equations for optimization and constraints. I tried geometric programming before but didn't put the proper (I hope) restrictions in place. Perhaps I gave up too early? They might not exactly fit geometric programming, but they might fit convex programming. Ray
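The log/affine conversion mentioned above can be spot-checked numerically. The sketch below uses a hypothetical posynomial (not the circuit's actual expressions, which are not shown here): under the substitution <math>x_i=e^{y_i}\,</math>, the log of a posynomial becomes a log-sum-exp of affine functions of <math>y\,</math>, which is convex. This is the standard transform behind geometric programming.

```python
import numpy as np

def log_posynomial(y):
    # Hypothetical posynomial f(x) = 3*x1^2/x2 + x1*x2, evaluated in
    # log-space: with x_i = exp(y_i), each monomial's log is affine in y.
    y1, y2 = y
    terms = np.array([np.log(3.0) + 2.0 * y1 - y2,  # log(3*x1^2/x2)
                      y1 + y2])                      # log(x1*x2)
    m = terms.max()
    return m + np.log(np.exp(terms - m).sum())       # stable log-sum-exp

# Spot-check midpoint convexity on random pairs of points.
rng = np.random.default_rng(0)
for _ in range(1000):
    ya, yb = rng.normal(size=2), rng.normal(size=2)
    mid = log_posynomial((ya + yb) / 2)
    assert mid <= (log_posynomial(ya) + log_posynomial(yb)) / 2 + 1e-12
print("midpoint convexity held on 1000 random pairs")
```

A numerical check like this obviously proves nothing in general, but it is a quick way to catch a mis-derived term before handing the transformed problem to a convex solver.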
Trial: Posynomial expressions
Thus the expression for is
Keeping the new variable we have the following constraint
The denominator of can be expressed as
Note a sign change; this is complemented in the denominator.
Note that due to the circuit physics for all errors
The expression for is
With the constraint
Unfortunately, applying the constraint algebraically leads to some negative terms. Perhaps collecting the positive and negative terms into separate conditions and imposing the sum constraint would avoid this?
Abstract form of Likelihood case

Setup:

<math>P\,</math> an n-dimensional collection of Gaussian probability distribution functions. This is a condition I want to liberalize to any convex PDF (and in some sense to a uniform PDF).

<math>p\in P\,</math> ; <math>\bar{p}\,</math> the PDF of <math>p\,</math>

General formula: <math>y=f(x;p)\,</math>

Calibration pair <math>[y_{c},x_{c}]\,</math> constraining <math>p\,</math> : <math>y_{c}=f(x_{c};p)\,</math>, consequently forming a new PDF <math>\bar{p'}\,</math> with <math>p'\,</math> the constrained <math>p\,</math>.

<math>y_{t}=f(x_{t};p')\,</math>

Clearly, given <math>y_{t}\,</math> fixed, <math>x_{t}\,</math> has a PDF: <math>\bar{x}_{t}=\bar{x}_{t}\left(y_{t},y_{c},x_{c},\bar{p}\right)=\bar{x}_{t}\left(y_{t},\bar{p'}\right)\,</math>

Problem 1: mode

maximize: the most likely value of <math>\bar{x}_{t}\,</math>

(i.e. <math>\frac{dx_{t}}{dp'}=0\,</math> for differentiable functions)

wrt: <math>p'\,</math>

given <math>y_{t},y_{c},x_{c},\bar{x}_{t},y=f(x;p)\,</math> or alternately <math>y_{t},\bar{x}_{t},y_{t}=f(x_{t};p')\,</math>
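A minimal numerical sketch of Problem 1, under assumptions not fixed by the text above: take a hypothetical affine model <math>y=f(x;p)=ax+b\,</math> with Gaussian <math>p=(a,b)\,</math>, so a single calibration pair imposes the linear constraint <math>ax_{c}+b=y_{c}\,</math>. Conditioning a Gaussian on a linear equality is closed-form, which gives the constrained <math>p'\,</math>; the PDF of <math>x_{t}\,</math> and its mode are then estimated by sampling. All numeric values are made up for illustration.

```python
import numpy as np

# Hypothetical prior on p = (a, b); all values assumed for illustration.
mu = np.array([2.0, 0.5])            # prior mean of (a, b)
Sigma = np.diag([0.04, 0.01])        # prior covariance
x_c, y_c = 1.0, 2.6                  # calibration pair
c = np.array([x_c, 1.0])             # constraint: c @ p = y_c

# Condition the Gaussian on the linear equality c @ p = y_c.
s = c @ Sigma @ c                                      # scalar c^T Sigma c
mu_p = mu + Sigma @ c * (y_c - c @ mu) / s             # constrained mean
Sigma_p = Sigma - np.outer(Sigma @ c, Sigma @ c) / s   # constrained (singular) cov

# Push p' through x_t = (y_t - b)/a by sampling, then estimate the
# mode of x_t from a histogram.
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mu_p, Sigma_p, size=200_000,
                                  check_valid="ignore")  # cov is rank-deficient
a, b = samples[:, 0], samples[:, 1]
y_t = 4.0                                               # test reading, assumed
x_t = (y_t - b) / a

hist, edges = np.histogram(x_t, bins=400)
mode_est = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
print(f"estimated mode of x_t given y_t={y_t}: {mode_est:.3f}")
```

This only exercises the easiest case (one scalar constraint, differentiable affine <math>f\,</math>); the general problem of maximizing the likelihood with respect to <math>p'\,</math> for nonlinear <math>f\,</math> is exactly what remains open above.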