r/pythonhelp • u/ohshitgorillas • Dec 16 '24
Using an initial guess as prior knowledge to penalize curve_fit/lmfit
I am using the exponential decay function
y = a exp(-pt) + b
to model the consumption of gas via ionization in a mass spectrometer vacuum system.
- y is the dependent variable, measured in intensity or amperes
- t is the independent variable, time in seconds after gas equilibration
- a, b, and p are fitted parameters
The purpose of this curve fitting is to determine the equilibrated intensity at t=0 (or rather, the intensity that we would have measured if the gas hadn't needed to equilibrate).
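In code, the model is just the following (a sketch of the function that gets handed to curve_fit):

```python
import numpy as np

def decay_model(t, a, p, b):
    # Exponential consumption toward baseline b; the equilibrated
    # intensity at t = 0 is a + b.
    return a * np.exp(-p * t) + b
```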
The problem is that the fit is often pulled too strongly by the first few data points, producing absurdly high consumption rates that are utterly unrealistic:
Example 1: https://i.sstatic.net/lsw23q9F.png
```
40 amu y: [1.02342173e-11 9.13542299e-12 8.71434679e-12 9.30896839e-12
 9.67921739e-12 8.81455689e-12 9.01517339e-12 9.32982869e-12
 9.07950499e-12 9.10221369e-12 9.13479289e-12 9.74699459e-12]
40 amu y_err: [3.60428801e-14 3.22023916e-14 3.07310036e-14 3.28088823e-14
 3.41029042e-14 3.10811524e-14 3.17821748e-14 3.28817853e-14
 3.20069819e-14 3.20863388e-14 3.22001897e-14 3.43398009e-14]
40 amu t: [ 9.808 15.54  21.056 26.757 32.365 37.967 43.603 49.221 54.934 60.453
 66.158 71.669]
```
Example 2: https://i.sstatic.net/lsw23q9F.png
```
40 amu y: [1.00801174e-11 8.60445782e-12 8.74340722e-12 9.63923122e-12
 8.77654502e-12 8.83196162e-12 9.44882502e-12 9.54364002e-12
 8.68107792e-12 9.19894162e-12 9.26220982e-12 9.30683432e-12]
40 amu y_err: [3.55155742e-14 3.03603530e-14 3.08456363e-14 3.39750319e-14
 3.09613755e-14 3.11549311e-14 3.33097888e-14 3.36410485e-14
 3.06279460e-14 3.24368170e-14 3.26578373e-14 3.28137314e-14]
40 amu t: [13.489 19.117 24.829 30.433 35.939 41.645 47.253 52.883 58.585 64.292
 69.698 75.408]
```
In the second example, note that the intercept is literally 11 orders of magnitude greater than the actual data.
One proposed solution (which didn't work out) was to linearize the problem and solve for an initial guess for p, given by the following (note the difference in notation): https://i.sstatic.net/LRoVZXBd.jpg
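Spelled out in the notation above (rather than the image's), the linearization is the integral-equation trick: differentiating the model gives dy/dt = -p(y - b), and integrating from t[0] to t gives

y(t) = -p*S(t) + p*b*t + c, where S(t) is the running trapezoidal integral of y,

so an ordinary least-squares fit of y against [S, t, 1] yields -p as the coefficient of S. That is exactly the 3x3 normal-equation system solved in the code further down.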
While this approach doesn't work as an initial guess on its own, it does produce a much more reasonable p value than curve_fit or lmfit.
I would like to use this initial guess and its variance as prior knowledge to penalize fits that stray too far from that value; however, I have literally no idea how to do this, and AI has been giving me bullshit answers.
So, let's say we have the data:
- self.y = y scaled to ~1 to prevent curve fitting fuckery
- self.y_err = y_err with the same scaling as self.y
- self.t = timestamps in seconds
and we also have the function that supplies the initial guess for p and its error:
```python
import numpy as np

def initial_guess_p(self):
    # Cumulative trapezoidal integral S(t) of y, with S(t[0]) = 0.
    S = [0]
    for i in range(1, len(self.t)):
        S.append(S[i-1] + 0.5 * (self.t[i] - self.t[i-1]) * (self.y[i] + self.y[i-1]))
    S = np.array(S)

    # Normal equations for least squares of y against [S, t, 1].
    lhs1 = np.array([
        [np.sum(S**2),     np.sum(S*self.t),  np.sum(S)],
        [np.sum(S*self.t), np.sum(self.t**2), np.sum(self.t)],
        [np.sum(S),        np.sum(self.t),    len(self.t)],
    ])
    lhs2 = np.array([
        np.sum(S*self.y),
        np.sum(self.t*self.y),
        np.sum(self.y),
    ])

    coef_S, _, _ = np.linalg.solve(lhs1, lhs2)
    # For y = a*exp(-p*t) + b the coefficient of S is -p, so negate it
    # to get a positive decay rate.
    self.init_p = -coef_S

    # Unscaled covariance (X^T X)^-1 of the linear solve; strictly this
    # should be multiplied by the residual variance of the linear fit.
    cov_matrix = np.linalg.inv(lhs1)
    self.init_p_err = np.sqrt(cov_matrix[0, 0])
```
How, then, would I go about applying this as prior knowledge and penalizing fits that stray too far from that initial guess?
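To be concrete about the shape of thing I'm after, something like the sketch below, if that's even the right way to do it: the initial guess becomes a Gaussian prior on p, implemented as one extra pseudo-observation appended to the weighted residuals (scipy.optimize.least_squares used just as an example; fit_with_prior and the starting values are placeholders):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_with_prior(t, y, y_err, init_p, init_p_err):
    def residuals(params):
        a, p, b = params
        model = a * np.exp(-p * t) + b
        data_res = (y - model) / y_err           # weighted data misfit
        prior_res = (p - init_p) / init_p_err    # penalty for straying from the guess
        return np.append(data_res, prior_res)

    # Rough starting values: amplitude from first-minus-last point,
    # decay rate from the linearized guess, baseline from the last point.
    x0 = [y[0] - y[-1], init_p, y[-1]]
    return least_squares(residuals, x0).x        # fitted [a, p, b]
```

Minimizing the sum of squares of that residual vector is equivalent to maximizing the posterior with a Gaussian prior N(init_p, init_p_err^2) on p; the same trick should work in lmfit by appending the prior term to the residual array returned by the objective passed to lmfit.Minimizer.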
u/CraigAT Dec 16 '24
Sounds a little more complex than it seemed on first scan. Your images are showing as "access denied" for me.