Talk:Auto-zero/Auto-calibration

From Wikimization


Revision as of 10:12, 2 September 2010

I have been working on a robust design structure for Auto-Zero/Auto-calibration implementations. I have a lot of moving parts in my head, but I believe I need outside viewpoints and knowledge in order to construct a general approach. If anybody is interested, please respond here. It is a bit more complicated than it would seem on the surface, IMHO. I sometimes think it falls within convex optimization, and sometimes I think it doesn't. I do have a particular example that illustrates the various problems that can arise. Although the ideas should be applicable to scientific measurements, the applications I have in mind relate to autonomous embedded software and hardware implementations.

Ray

Note on the examples: I think that, due to the physically meaningful restrictions on the problem (R > 0 and errors less than 100%), a conversion process using logs and affine transforms will generate posynomial equations for the optimization and the constraints. I tried geometric programming before but didn't put the proper (I hope) restrictions in place. Perhaps I gave up too early? They might not exactly fit geometric programming, but they might fit convex programming. Ray
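For background (standard geometric-programming material, not specific to this page): a posynomial does become convex under exactly this kind of log/affine change of variables. With

<math>f(x)=\sum_{k}c_{k}\prod_{i}x_{i}^{a_{ik}},\quad c_{k}>0,\quad x_{i}>0,</math>

the substitution <math>x_{i}=e^{y_{i}}</math> gives

<math>\log f=\log\sum_{k}e^{a_{k}^{T}y+\log c_{k}},</math>

a log-sum-exp of affine functions of <math>y</math>, which is convex. This is why the restrictions matter: the substitution is only available while every quantity being exponentiated stays positive, which is what R > 0 and "errors less than 100%" guarantee below.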

== Trial: Posynomial expressions ==

<math>e^{\psi_{x}}=\left(1-\frac{v_{off}}{V_{x}}\right),\quad e^{\psi_{c}}=\left(1-\frac{v_{off}}{V_{c}}\right),\quad e^{\psi_{t}}=\left(1-\frac{v_{off}}{V_{t}}\right),\quad e^{\mathcal{V}_{x}}=V_{x},\quad e^{\mathcal{V}_{c}}=V_{c},\quad e^{\mathcal{V}_{t}}=V_{t}</math>

<math>e^{\mathcal{R}_{d}}=R_{x}+e_{com}+R_{b}+e_{b},\quad e^{\mathcal{R}_{x}}=R_{x},\quad e^{\epsilon_{com}}=e_{com},\quad e^{\mathcal{R}_{b}}=R_{b},\quad e^{\epsilon_{b}}=\left(1-\frac{e_{b}}{R_{t}}\right)</math>

<math>e^{\mathcal{R}_{t}}=R_{t}</math>

<math>e^{\mathcal{V}_{ref}}=V_{ref},\quad e^{\psi_{ref}}=\left(1-\frac{v_{off}}{V_{ref}}\right)</math>
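A quick numeric sanity check of these substitutions (the voltage values below are hypothetical, chosen only for illustration): the logs are well defined exactly when the offset error is less than 100% of the voltage, and exponentiating the affine combination recovers the offset-corrected voltage.

```python
import math

# Hypothetical example values (not from the page): a 2.5 V reference,
# a 10 mV offset error, and a 1.2 V measured node voltage.
V_ref, V_x, v_off = 2.5, 1.2, 0.010

# The substitutions are well defined exactly when the "errors less than
# 100%" restriction holds, i.e. v_off < V_x, so each log argument is > 0.
psi_x = math.log(1.0 - v_off / V_x)      # e^{psi_x}   = 1 - v_off/V_x
psi_ref = math.log(1.0 - v_off / V_ref)  # e^{psi_ref} = 1 - v_off/V_ref
script_V_x = math.log(V_x)               # e^{script V_x} = V_x

# Round trip: e^{script_V_x + psi_x} = V_x (1 - v_off/V_x) = V_x - v_off,
# the offset-corrected voltage.
corrected = math.exp(script_V_x + psi_x)
assert abs(corrected - (V_x - v_off)) < 1e-12
```

The same pattern applies to <math>V_{c}</math>, <math>V_{t}</math>, and <math>V_{ref}</math>.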

'''Thus the expression for <math>V_{x}</math> is'''

<math>e^{\mathcal{V}_{x}}e^{\psi_{x}}=e^{\mathcal{V}_{ref}}e^{\psi_{ref}}\cdot\left(e^{\mathcal{R}_{x}}+e^{\epsilon_{com}}\right)\cdot e^{-\mathcal{R}_{d}}</math>
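As a sketch of why this form looks convex-programming-friendly (my reading, not from the page): taking logs of both sides gives

<math>\mathcal{V}_{x}+\psi_{x}=\mathcal{V}_{ref}+\psi_{ref}+\log\left(e^{\mathcal{R}_{x}}+e^{\epsilon_{com}}\right)-\mathcal{R}_{d},</math>

which is affine in the script variables except for the single log-sum-exp term, and log-sum-exp is convex.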

Keeping the new variable <math>e^{\mathcal{R}_{d}}</math>, we have the following constraint

<math>e^{\mathcal{R}_{d}}=e^{\mathcal{R}_{x}}+e^{\epsilon_{com}}+e^{\mathcal{R}_{b}}e^{\epsilon_{b}}</math>
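One caution, again standard GP background rather than anything from this page: geometric programs only admit monomial equalities, so a posynomial equality like this one is usually relaxed to a posynomial inequality by dividing through by the left-hand monomial,

<math>e^{\mathcal{R}_{x}-\mathcal{R}_{d}}+e^{\epsilon_{com}-\mathcal{R}_{d}}+e^{\mathcal{R}_{b}+\epsilon_{b}-\mathcal{R}_{d}}\le1,</math>

which is GP-compatible. Whether the relaxation is tight at the optimum depends on which direction the objective pushes <math>e^{\mathcal{R}_{d}}</math>.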

The denominator of <math>R_{t}</math> can be expressed as

<math>e^{\eta_{e}}=V_{ref}+e_{ref}-V_{t}+v_{off}</math>

Note the sign change here; it is complemented in the denominator.

Note that, due to the circuit physics, <math>V_{ref}>V_{x}</math> for all errors.

'''The expression for <math>R_{t}</math> is'''

<math>e^{\mathcal{R}_{t}}=\left(\left(e^{\mathcal{V}_{t}}e^{\psi_{t}}\right)\left(e^{\epsilon_{com}}+e^{\epsilon_{b}}e^{\mathcal{R}_{b}}\right)+e^{\epsilon_{com}}e^{\mathcal{V}_{ref}}e^{\psi_{ref}}\right)e^{-\eta_{e}}</math>

With the constraint

<math>e^{\eta_{e}}=e^{\mathcal{V}_{t}}e^{\psi_{t}}+e^{\mathcal{V}_{ref}}e^{\psi_{ref}}</math>

Unfortunately, applying the constraint algebraically leads to some negative terms. Perhaps collecting the positive and negative terms into separate conditions and placing a sum constraint would avoid this?
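One standard workaround for exactly this situation, offered only as a possible direction (successive GP / signomial-programming condensation, not from the page): move all negative terms to the other side so the constraint reads posynomial equals posynomial, say <math>p(x)=q(x)</math>, then condense one side to a monomial via the arithmetic-geometric mean inequality,

<math>\hat{q}(x)=\prod_{k}\left(\frac{q_{k}(x)}{\alpha_{k}}\right)^{\alpha_{k}},\qquad\alpha_{k}=\frac{q_{k}(x^{0})}{q(x^{0})},</math>

where the <math>q_{k}</math> are the monomial terms of <math>q</math> and <math>x^{0}</math> is the current iterate. The condensed constraint <math>p(x)/\hat{q}(x)\le1</math> is GP-compatible, and the approximation is re-formed around each new solution until it converges.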
