Testing different values in a function in R
Right now I am trying to fit spectral data with a sum of Gaussian peaks.
I want to build a loop to iterate through combinations of values in a "guesstimate" range for each of the peak positions, absorbances, and bandwidths until my R^2 value is minimized.
However, I have no idea how to structure my code. I was thinking of running it through a while loop until the R^2 value is below a certain threshold, but I don't know how to run through all the combinations of my variables...
Does anyone have any pointers on where to start?
Thank you!
This is what I'm working with:
peak1P <- seq(200, 300, 1)   # Likely peak positions
peak2P <- seq(400, 550, 1)
peak3P <- seq(600, 750, 1)
peak1A <- seq(0, 0.6, 0.05)  # Likely peak absorbances (step 0.05 so the range is actually sampled)
peak2A <- seq(0, 0.5, 0.05)
peak3A <- seq(0, 0.3, 0.05)
peak1B <- seq(1, 20, 1)      # Likely peak bandwidths (start at 1 to avoid division by zero)
peak2B <- seq(1, 20, 1)
peak3B <- seq(100, 300, 1)
# Individual peak models (WvLnV is the wavelength vector, defined elsewhere)
AbsG1 <- peak1A * exp(-4 * log(2) * ((WvLnV/1500 - peak1P/1500)^2) / (peak1B/1500))
AbsG2 <- peak2A * exp(-4 * log(2) * ((WvLnV/1500 - peak2P/1500)^2) / (peak2B/1500))
AbsG3 <- peak3A * exp(-4 * log(2) * ((WvLnV/1500 - peak3P/1500)^2) / (peak3B/1500))
# Final fit to compare with the data
AbsF <- AbsG1 + AbsG2 + AbsG3
Ideally, my code would run through each combination of elements in peak(1-3)P, peak(1-3)A, and peak(1-3)B until the error is minimized (my function for calculating the error is not shown here). Something like the sketch below is what I have in mind.
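A rough sketch of what I mean, where sse() is a placeholder for my error function (not shown) comparing a candidate AbsF against the measured spectrum; I realize the full grid of combinations is astronomically large, which is part of the problem:

gauss <- function(pos, amp, bw) amp * exp(-4 * log(2) * ((WvLnV/1500 - pos/1500)^2) / (bw/1500))
# every combination of the guesstimate ranges (far too many rows to enumerate in practice!)
grid <- expand.grid(p1 = peak1P, p2 = peak2P, p3 = peak3P,
                    a1 = peak1A, a2 = peak2A, a3 = peak3A,
                    b1 = peak1B, b2 = peak2B, b3 = peak3B)
best <- Inf
for (i in seq_len(nrow(grid))) {
  g <- grid[i, ]
  AbsF <- gauss(g$p1, g$a1, g$b1) + gauss(g$p2, g$a2, g$b2) + gauss(g$p3, g$a3, g$b3)
  err <- sse(AbsF)                       # placeholder error function
  if (err < best) { best <- err; bestPars <- g }
}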
asked Nov 9 at 21:34 by Lil Coder
1 Answer
This sort of problem is termed "mixture modelling" in stats, and there are lots of standard approaches depending on the sort of data you have, how much you know about the underlying process that generated it, and the framework in which you want to model the results.
If you really don't want to use one of the existing packages, you'd follow the standard estimation process: write a function that takes some set of parameters and calculates how well they fit the data (I'd suggest probabilistic/likelihood-based approaches rather than R^2), then pass it to something like optim().
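Here's a minimal sketch of that, assuming your spectrum lives in vectors wvl (wavelengths) and obs (measured absorbances), both placeholder names, and using a plain sum-of-squares objective for simplicity. Note I've squared the width (the usual FWHM parameterisation of a Gaussian); adjust to match your own model:

# sum of three Gaussians; par = c(positions, amplitudes, widths)
absorbance <- function(par, wvl) {
  p <- par[1:3]; a <- par[4:6]; b <- par[7:9]
  rowSums(sapply(1:3, function(i) a[i] * exp(-4 * log(2) * (wvl - p[i])^2 / b[i]^2)))
}
# objective: sum of squared residuals between model and data
sse <- function(par, wvl, obs) sum((obs - absorbance(par, wvl))^2)
# start from the midpoints of the "guesstimate" ranges
start <- c(250, 475, 675,    # positions
           0.3, 0.25, 0.15,  # amplitudes
           10, 10, 200)      # widths
fit <- optim(start, sse, wvl = wvl, obs = obs, method = "BFGS", hessian = TRUE)
fit$par      # best-fit parameters
fit$hessian  # curvature at the optimum, usable for rough standard errors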
Alternatively, use a Monte Carlo approach (e.g. MCMC). Either way, the idea is to start from a guess and move the parameters around by small amounts to find the "best" set of parameters. Better approaches will also say how confident they are about the various estimates (e.g. via the Hessian from optim(), or credible intervals from Bayesian MCMC approaches).
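To illustrate the MCMC route, here's a bare-bones random-walk Metropolis sampler reusing sse(), start, wvl, and obs from the sketch above. It treats exp(-sse/s) as an unnormalised likelihood, with s and the proposal scales picked arbitrarily; a toy to show the mechanics, not a tuned sampler:

set.seed(1)
n_iter <- 10000
s <- 0.01                                      # "temperature": smaller = peakier target
step <- c(rep(1, 3), rep(0.01, 3), rep(1, 3))  # proposal sd per parameter
cur <- start
cur_err <- sse(cur, wvl, obs)
samples <- matrix(NA_real_, n_iter, length(start))
for (k in seq_len(n_iter)) {
  prop <- cur + rnorm(length(cur), sd = step)  # random-walk proposal
  prop_err <- sse(prop, wvl, obs)
  # Metropolis acceptance for a target density proportional to exp(-sse/s)
  if (log(runif(1)) < -(prop_err - cur_err) / s) {
    cur <- prop
    cur_err <- prop_err
  }
  samples[k, ] <- cur
}
# drop burn-in and read off rough credible intervals per parameter
apply(samples[-(1:2000), ], 2, quantile, probs = c(0.025, 0.5, 0.975))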
answered Nov 9 at 21:42 by Sam Mason, edited Nov 9 at 21:52