Talk:Auto-zero/Auto-calibration

From Wikimization

Revision as of 08:59, 12 September 2010

I have been working on creating a robust design structure for Auto-Zero/Auto-Calibration implementations. I have a lot of moving parts in my head, but I believe I need outside viewpoints and knowledge in order to construct a general approach. If anybody is interested, please respond here. It is a bit more complicated than it would seem on the surface, IMHO. I sometimes think it falls within convex optimization; at other times I think it doesn't. I do have a particular example that illustrates the various problems that can arise. Although the ideas should be applicable to scientific measurements, the applications I have in mind relate to autonomous embedded software and hardware implementations.

Ray

Note on the examples: I think that, due to the physically meaningful restrictions on the problem (<math>R>0\,</math> and errors less than 100%), a conversion process using logs and affine transforms will generate posynomial equations for the optimization and constraints. I tried geometric programming before but didn't put the proper (I hope) restrictions in place. Perhaps I gave up too early? They might not exactly fit geometric programming, but they might fit convex programming. Ray
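The log change of variables mentioned above can be sketched as follows; the coefficients and exponents below are made up purely for illustration. Substituting <math>x=e^{y}\,</math> turns the log of a posynomial into a log-sum-exp of affine functions of <math>y\,</math>, which is convex:

```python
import numpy as np

# Hypothetical posynomial: sum_k c_k * prod_i x_i^{a_ki} with c_k > 0.
# Substituting x = exp(y) turns log(posynomial) into a log-sum-exp of
# affine functions of y, which is convex.
c = np.array([0.5, 0.3])                  # positive coefficients (assumed)
A = np.array([[1.0, -1.0],                # exponent matrix a_ki (assumed)
              [-2.0, 0.5]])

def posynomial(x):
    return np.sum(c * np.prod(x ** A, axis=1))

def lse(y):
    # the same posynomial, evaluated in the log domain
    return np.log(np.sum(np.exp(np.log(c) + A @ y)))

x = np.array([1.3, 0.7])
y = np.log(x)
assert np.isclose(np.log(posynomial(x)), lse(y))   # identical values

# midpoint-convexity spot check of the log-domain form
y1, y2 = np.array([0.2, -0.4]), np.array([-0.5, 0.9])
assert lse((y1 + y2) / 2) <= (lse(y1) + lse(y2)) / 2
```

This is why the <math>R>0\,</math> restriction matters: the substitution only exists for strictly positive variables and coefficients.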

Trial: Posynomial expressions

<math>e^{\psi_{x}}=\left(1-\frac{v_{off}}{V_{x}}\right),\quad e^{\psi_{c}}=\left(1-\frac{v_{off}}{V_{c}}\right),\quad e^{\psi_{t}}=\left(1-\frac{v_{off}}{V_{t}}\right),\quad e^{\mathcal{V}_{x}}=V_{x},\quad e^{\mathcal{V}_{c}}=V_{c},\quad e^{\mathcal{V}_{t}}=V_{t}\,</math>

<math>e^{\mathcal{R}_{d}}=R_{x}+e_{com}+R_{b}+e_{b},\quad e^{\mathcal{R}_{x}}=R_{x},\quad e^{\epsilon_{com}}=e_{com},\quad e^{\mathcal{R}_{b}}=R_{b},\quad e^{\epsilon_{b}}=\left(1-\frac{e_{b}}{R_{t}}\right)\,</math>

<math>e^{\mathcal{R}_{t}}=R_{t}\,</math>

<math>e^{\mathcal{V}_{ref}}=V_{ref},\quad e^{\psi_{ref}}=\left(1-\frac{v_{off}}{V_{ref}}\right)\,</math>

Thus the expression for <math>V_{x}\,</math> is

<math>e^{\mathcal{V}_{x}}e^{\psi_{x}}=e^{\mathcal{V}_{ref}}e^{\psi_{ref}}\cdot\left(e^{\mathcal{R}_{x}}+e^{\epsilon_{com}}\right)\cdot e^{-\mathcal{R}_{d}}\,</math>

Keeping the new variable <math>e^{\mathcal{R}_{d}}\,</math>, we have the following constraint:

<math>e^{\mathcal{R}_{d}}=e^{\mathcal{R}_{x}}+e^{\epsilon_{com}}+e^{\mathcal{R}_{b}}e^{\epsilon_{b}}\,</math>
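As a numerical sanity check (all component values below are hypothetical), substituting the definitions back into the expression for <math>V_{x}\,</math> reduces it to the underlying voltage-divider relation <math>V_{x}-v_{off}=(V_{ref}-v_{off})\,(R_{x}+e_{com})/(R_{x}+e_{com}+R_{b}+e_{b})\,</math>:

```python
import numpy as np

# Spot check of the log-domain identity for V_x.  All numbers are assumed
# sample values, chosen only so that the divider relation holds exactly.
V_ref, v_off = 5.0, 0.01          # reference and offset voltages (assumed)
R_x, R_b = 1000.0, 2200.0         # resistances in ohms (assumed)
e_com, e_b = 1.5, 0.8             # error terms (assumed)

R_d = R_x + e_com + R_b + e_b                         # e^{R_d} definition
V_x = v_off + (V_ref - v_off) * (R_x + e_com) / R_d   # divider solved for V_x

# left side:  e^{V_x} e^{psi_x} = V_x * (1 - v_off/V_x) = V_x - v_off
lhs = V_x * (1 - v_off / V_x)
# right side: e^{V_ref} e^{psi_ref} (e^{R_x} + e^{e_com}) e^{-R_d}
rhs = V_ref * (1 - v_off / V_ref) * (R_x + e_com) / R_d
assert np.isclose(lhs, rhs)
assert V_x < V_ref                # consistent with the circuit-physics note
```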

The denominator of <math>R_{t}\,</math> can be expressed as

<math>e^{\mathcal{\eta}_{e}}=V_{ref}+e_{ref}-V_{t}+v_{off}\,</math>

Note the sign change; this is compensated for in the denominator.

Note that, due to the circuit physics, <math>V_{ref}>V_{x}\,</math> for all errors.

The expression for <math>R_{t}\,</math> is

<math>e^{\mathcal{R}_{t}}=\left(\left(e^{\mathcal{V}_{t}}e^{\psi_{t}}\right)\left(e^{\epsilon_{com}}+e^{\epsilon_{b}}e^{\mathcal{R}_{b}}\right)+e^{\epsilon_{com}}e^{\mathcal{V}_{ref}}e^{\mathcal{\psi}_{ref}}\right)e^{-\mathcal{\eta}_{e}}\,</math>

With the constraint

<math>e^{\mathcal{\eta}_{e}}=e^{\mathcal{V}_{t}}e^{\psi_{t}}+e^{\mathcal{V}_{ref}}e^{\mathcal{\psi}_{ref}}\,</math>

Unfortunately, applying the constraint algebraically leads to some negative terms. Perhaps collecting the positive and negative terms into separate conditions and imposing the sum constraint would avoid this?

== Abstract form of Likelihood case ==

Setup:

<math>P\,</math> an <math>n\,</math>-dimensional collection of Gaussian probability distribution functions. This is a condition I want to relax to any convex PDF (and in some sense to a uniform PDF).

<math>p\in P\,</math>; <math>\bar{p}\,</math> the PDF of <math>p\,</math>.

General formula: <math>y=f(x;p)\,</math>

Calibration pair <math>[y_{c},x_{c}]\,</math> constraining <math>p\,</math>: <math>y_{c}=f(x_{c};p)\,</math>, consequently forming a new PDF <math>\bar{p'}\,</math> with <math>p'\,</math> the constrained <math>p\,</math>.

<math>y_{t}=f(x_{t};p')\,</math>

Clearly, given <math>y_{t}\,</math> fixed, <math>x_{t}\,</math> has a PDF: <math>\bar{x}_{t}=\bar{x}_{t}\left(y_{t},y_{c},x_{c},\bar{p}\right)=\bar{x}_{t}\left(y_{t},\bar{p'}\right)\,</math>

Problem 1: mode

maximize: the most likely value of <math>\bar{x}_{t}\,</math>

(i.e. <math>\frac{dx_{t}}{dp'}=0\,</math> for differentiable functions)

wrt: <math>p'\,</math>

given <math>y_{t},y_{c},x_{c},\bar{x}_{t},y=f(x;p)\,</math> or alternately <math>y_{t},\bar{x}_{t},y_{t}=f(x_{t};p')\,</math>
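A minimal sketch of Problem 1 for the special case of an affine model <math>f(x;p)=ax+b\,</math> with Gaussian <math>p=(a,b)\,</math> (all numbers below are assumed for illustration): the calibration pair restricts <math>p\,</math> to a hyperplane, the mode of the constrained Gaussian <math>p'\,</math> has a closed form, and <math>x_{t}\,</math> is then read off at that mode:

```python
import numpy as np

# Affine special case of the abstract setup: y = f(x; p) = a*x + b,
# with a Gaussian prior on p = (a, b).  All numbers are assumed.
mu = np.array([2.0, 0.5])            # prior mean of (a, b)
Sigma = np.diag([0.1, 0.05])         # prior covariance (assumed diagonal)

x_c, y_c = 1.0, 2.4                  # calibration pair [y_c, x_c]
cvec = np.array([x_c, 1.0])          # constraint: y_c = cvec . p

# mode of the Gaussian restricted to the hyperplane cvec . p = y_c
gain = Sigma @ cvec / (cvec @ Sigma @ cvec)
p_mode = mu + gain * (y_c - cvec @ mu)
assert np.isclose(cvec @ p_mode, y_c)   # calibration constraint holds

y_t = 3.0                            # new measurement
a_m, b_m = p_mode
x_t = (y_t - b_m) / a_m              # x_t evaluated at the mode of p'
```

For nonlinear <math>f\,</math>, or for the PDF of <math>x_{t}\,</math> itself rather than the plug-in value at the mode of <math>p'\,</math>, a Jacobian correction would enter; this sketch only covers the affine-Gaussian case.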
