Optimization of random function

nikol


Total Posts: 1124
Joined: Jun 2005
 
Posted: 2020-06-04 23:08
Suppose the objective function is written for a function f(x, par, N) as the distance to measurement points y.
f(.) is calculated with Monte Carlo, where N is the number of trials. The number of calls to f() needed to find the minimum is M.

The error of f() is eps and is driven by 1/N;
the error of the objective function is set by the tolerance, tol.

Obviously, by increasing N I increase the precision of f(), which allows the minimum to be found with fewer calls to f(). The total number of trials will be roughly N x M. But it is not optimal.

I can do the same faster if I change N dynamically: the algorithm can first search a wide region cheaply, then gradually increase the precision of f() to get better precision on the parameters, and so on, until it converges below tol.
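A rough sketch of the loop I have in mind, in Python (f_mc here is just a made-up Monte Carlo estimator, and Nelder-Mead is an arbitrary choice of inner optimizer):

import numpy as np
from scipy.optimize import minimize

# Stand-in Monte Carlo estimator of f; its noise shrinks like 1/sqrt(N).
def f_mc(par, x, N, rng):
    samples = rng.normal(loc=par[0] * x + par[1], scale=1.0, size=(N, x.size))
    return samples.mean(axis=0)

# Objective: squared distance between the MC estimate and the measurements y.
def objective(par, x, y, N, rng):
    return np.sum((f_mc(par, x, N, rng) - y) ** 2)

def adaptive_fit(x, y, par0, tol=1e-4, N0=100, growth=4, seed=0):
    rng = np.random.default_rng(seed)
    par, N, prev = np.asarray(par0, float), N0, np.inf
    while True:
        # Inner solve at the current precision; tolerances track the noise level.
        res = minimize(objective, par, args=(x, y, N, rng),
                       method="Nelder-Mead",
                       options={"xatol": 1.0 / np.sqrt(N),
                                "fatol": 1.0 / np.sqrt(N)})
        par = res.x
        # Converged once raising N no longer moves the objective by more than tol.
        if abs(prev - res.fun) < tol:
            return par, N
        prev, N = res.fun, N * growth

Each restart begins from the previous optimum, so the expensive high-N calls only happen once the search region is already narrow.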

I looked into "Stochastic optimization" but could not find what I need. Maybe I just overlooked the obvious.

Thanks.

mgh95


Total Posts: 1
Joined: Jun 2020
 
Posted: 2020-06-05 04:25
You should look into the term "adaptive mesh refinement". There are adaptations of it to Bayesian inference in MCMC, mostly in the non-parametric case. It is not as popular in that area, but it is certainly a known method that may give you what you need.
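For the one-parameter case here, the analogue might look like this rough sketch (noisy_objective stands in for your Monte Carlo objective; all names and constants are illustrative):

import numpy as np

def amr_minimize(noisy_objective, lo, hi, levels=6, pts=9, N0=100):
    # Adaptive-mesh-style search: evaluate on a coarse grid, zoom into
    # the cell around the best point, and raise the Monte Carlo sample
    # size N as the mesh gets finer, so the noise shrinks with the cell.
    N = N0
    for _ in range(levels):
        grid = np.linspace(lo, hi, pts)
        vals = [noisy_objective(g, N) for g in grid]
        best = int(np.argmin(vals))
        lo = grid[max(best - 1, 0)]
        hi = grid[min(best + 1, pts - 1)]
        N *= 4
    return 0.5 * (lo + hi), N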

nikol


Total Posts: 1124
Joined: Jun 2005
 
Posted: 2020-06-05 08:55
You put me on the right track ("adaptive" is the key).
Thank you!

ronin


Total Posts: 585
Joined: May 2006
 
Posted: 2020-06-05 16:06
Stuart Geman's paper Diffusions for Global Optimization is the classic text for this sort of thing.

This one.

Basically, the randomness effectively smooths out the objective function, which makes it easier to avoid getting stuck in local minima. But it costs you precision, so you have to find the right balance.
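A toy discretisation of that idea (annealed Langevin dynamics with the usual c/log(k) cooling schedule; the step size and constants are made up for illustration):

import numpy as np

def langevin_minimize(grad_f, x0, steps=20000, eta=1e-3, c=1.0, seed=0):
    # Gradient descent plus Gaussian noise whose temperature decays
    # logarithmically: early iterations explore widely, late ones
    # settle into a (hopefully global) minimum.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    for k in range(1, steps + 1):
        T = c / np.log(k + 1)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta * T) * rng.normal(size=x.shape)
    return x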

"There is a SIX am?" -- Arthur