Introduction

Simulated annealing is a popular technique for optimizing model parameters. It is inspired by physical annealing: the process in which a material is heated to its annealing temperature and then cooled slowly so that the desired structure can form. Simulated annealing mimics this process to optimize parameters.

In this article, let us look at:

  1. Simulated Annealing
  2. Implementing Simulated Annealing
  3. Stopping Criteria of Simulated Annealing
  4. Simulated Annealing Worked Example
  5. Simulated Annealing vs Hill-Climbing Methods

1. Simulated Annealing

Simulated annealing is a method for solving bound-constrained and unconstrained optimization problems. It is modeled on physical annealing and is used to minimize the system's energy, i.e., the objective function.

At each step of simulated annealing, a random new point is generated. The distance between the current point and the new point is drawn from a probability distribution whose scale is proportional to the temperature. The algorithm accepts points that lower the objective, subject to any constraints, but points that raise the objective are also accepted with a certain probability, so that the algorithm explores the full search space instead of getting trapped in a local minimum.

Optimization by simulated annealing proceeds by systematically decreasing the temperature, which narrows the extent of the search.
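The acceptance rule behind this behaviour is the Metropolis criterion: an uphill move that increases the objective by Δ is accepted with probability exp(−Δ/T), where T is the current temperature. A minimal sketch in Python (the function name `accept` is just illustrative):

```python
import math
import random

def accept(current_cost, new_cost, temperature):
    """Metropolis criterion: always accept improvements; accept a
    worse point with probability exp(-delta / T)."""
    if new_cost <= current_cost:
        return True
    delta = new_cost - current_cost
    return random.random() < math.exp(-delta / temperature)

# The same uphill move (delta = 1) is accepted often at high
# temperature and almost never at low temperature:
print(math.exp(-1.0 / 10.0))  # ≈ 0.905
print(math.exp(-1.0 / 0.1))   # ≈ 4.5e-5
```

As the temperature decreases, uphill moves become exponentially less likely, which is exactly what narrows the search over time.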

2. Implementing Simulated Annealing

Simulated annealing in AI follows a set of steps, which can be summarized as follows:

  • Simulated annealing generates a trial point at random. The distance between the current point and the trial point is drawn from a probability distribution whose scale is the temperature. The annealing function sets the distribution of trial-point distances, and trial points are shifted as needed to keep them within the bounds.
  • An acceptance function then determines whether the new point is better than the current one. If the new point is better, it becomes the next point; if it is worse, it may still be accepted, with a probability given by the acceptance function.
  • The algorithm systematically lowers the temperature while keeping track of the best point generated in the process.
  • The annealing parameters control how the temperature is raised and lowered. They are based on estimates of the gradient of the objective in each dimension.
  • Simulated annealing concludes when it reaches a sufficiently low minimum or meets any of the specified stopping criteria.
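The steps above can be sketched as a minimal minimization loop. Everything here — the geometric cooling schedule, the uniform step, and the parameter names — is an illustrative choice rather than a fixed standard:

```python
import math
import random

def simulated_annealing(objective, x0, temp0=10.0, cooling=0.95,
                        step=1.0, iters_per_temp=50, temp_min=1e-3):
    """Minimize `objective` starting from x0, tracking the best point seen."""
    current = best = x0
    current_cost = best_cost = objective(x0)
    temp = temp0
    while temp > temp_min:
        for _ in range(iters_per_temp):
            # Trial point: random step around the current point.
            candidate = current + random.uniform(-step, step)
            candidate_cost = objective(candidate)
            delta = candidate_cost - current_cost
            # Accept improvements always; worse points with prob exp(-delta/T).
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                current, current_cost = candidate, candidate_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        temp *= cooling  # geometric cooling schedule
    return best, best_cost

random.seed(0)
x, fx = simulated_annealing(lambda v: (v - 3) ** 2, x0=-10.0)
print(round(x, 2), round(fx, 4))  # x should end up close to 3
```

Because the best point seen is recorded separately from the current point, a late uphill wander cannot lose the best solution found so far.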

3. Stopping Criteria of Simulated Annealing

Some of the conditions used as the basis for stopping simulated annealing are as follows:

  • The algorithm runs until the change in the objective function falls below a tolerance. The default tolerance is 1e-6.
  • The default iteration limit is Inf, but it can be set to any positive integer. When the algorithm exceeds this limit, it stops.
  • Annealing concludes when the maximum number of function evaluations is reached. The default is 3000 × the number of variables.
  • The default time limit is Inf; when it is reached, the algorithm stops.
  • The algorithm concludes when the best objective function value drops below the objective limit. The default limit is -Inf.
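These criteria can be gathered into a single check. The dictionary keys below are hypothetical names, and the defaults simply mirror the values quoted above:

```python
import math
import time

def should_stop(state):
    """Return a stop reason, or None to keep annealing.
    Keys and defaults are illustrative, mirroring the text's values."""
    if state["objective_change"] < state.get("tolerance", 1e-6):
        return "objective change below tolerance"
    if state["iterations"] > state.get("max_iterations", math.inf):
        return "iteration limit exceeded"
    if state["evaluations"] >= state.get("max_evaluations",
                                         3000 * state["num_variables"]):
        return "evaluation budget exhausted"
    if time.time() - state["start_time"] > state.get("max_time", math.inf):
        return "time limit reached"
    if state["best_value"] <= state.get("objective_limit", -math.inf):
        return "objective limit reached"
    return None
```

In practice a solver would call such a check once per iteration and report the reason it terminated.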

4. Simulated Annealing Worked Example

To understand how simulated annealing works, consider the travelling salesman problem. A solution can be implemented in any programming language. Let us walk through the problem and a simulated-annealing solution.

  • First, a city class is created to specify the destinations the travelling salesman will visit.
  • Next, a class is created to keep track of the cities.
  • Then a class is created to model the salesman's tour.
  • With these classes and the information in hand, the simulated-annealing algorithm itself is written.
  • Thus, even for this type of optimization problem, a relatively simple algorithm produces a good solution.
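A compact sketch of the same idea, collapsing the classes into a coordinate list and a tour-length function (the city coordinates below are made up for illustration, and the temperature settings are arbitrary choices):

```python
import math
import random

# Hypothetical city coordinates; any list of (x, y) points works.
cities = [(0, 0), (1, 5), (2, 3), (5, 2), (6, 6), (4, 0), (3, 7), (7, 1)]

def tour_length(tour):
    """Total length of a closed tour visiting each city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(temp=100.0, cooling=0.995, temp_min=1e-2):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    cost = tour_length(tour)
    while temp > temp_min:
        # Neighbour move: swap two cities in the tour.
        i, j = random.sample(range(len(tour)), 2)
        candidate = tour[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        candidate_cost = tour_length(candidate)
        delta = candidate_cost - cost
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            tour, cost = candidate, candidate_cost
        temp *= cooling
    return tour, cost

random.seed(1)
best_tour, best_len = anneal_tsp()
print(best_tour, round(best_len, 2))
```

The swap move is the simplest possible neighbourhood; real TSP solvers often use 2-opt reversals instead, which tend to converge to much shorter tours.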

5. Simulated Annealing vs Hill-Climbing Methods

Hill climbing and simulated annealing differ considerably in how they are applied and in the results they achieve. Simulated annealing can be viewed as a modification, or an advanced version, of hill climbing. Hill climbing reaches an optimum by always moving to the best neighbouring state, whereas simulated annealing achieves its objective by accepting a bad move once in a while. Hill climbing (or descent) offers no guarantee of finding the global optimum, while simulated annealing, given a sufficiently slow cooling schedule, converges to the global optimum with high probability.
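The contrast is easy to demonstrate on a small, made-up landscape: greedy hill climbing descending from the left end gets trapped in the local minimum, while annealing can accept uphill moves and cross the barrier toward the global minimum:

```python
import math
import random

# A 1-D landscape with a local minimum (value 3 at index 2) separated
# from the global minimum (value 0 at index 6) by a barrier.
landscape = [5, 4, 3, 4, 5, 1, 0, 1, 2]

def hill_climb(start):
    """Greedy descent: move to the lower neighbour, stop when none is lower."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = min(neighbours, key=lambda j: landscape[j])
        if landscape[best] >= landscape[i]:
            return i
        i = best

def anneal(start, temp=5.0, cooling=0.99, temp_min=1e-3):
    """Same walk, but uphill moves are accepted with prob exp(-delta/T)."""
    i = best = start
    while temp > temp_min:
        j = random.choice([k for k in (i - 1, i + 1) if 0 <= k < len(landscape)])
        delta = landscape[j] - landscape[i]
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            i = j
            if landscape[i] < landscape[best]:
                best = i
        temp *= cooling
    return best

random.seed(0)
print(landscape[hill_climb(0)])  # prints 3: stuck in the local minimum
print(landscape[anneal(0)])      # can cross the barrier to a lower value
```

Hill climbing from index 0 deterministically stops at the local minimum; annealing's early high-temperature phase behaves almost like a random walk, which is what carries it over the barrier.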

Conclusion

Simulated annealing has a definite advantage over methods such as hill climbing. While pure descent can get stuck in a local optimum, annealing is far more likely to reach the global optimum. A hill climber accepts a solution only when the neighbouring solution is better than the current point. Annealing, by contrast, also accepts a worse solution once in a while in order to jump out of a local optimum.

There are no right or wrong ways of learning AI and ML technologies – the more, the better! These valuable resources can be the starting point for your journey on how to learn Artificial Intelligence and Machine Learning. Does pursuing AI and ML interest you? If you want to step into the world of emerging tech, you can accelerate your career with these Machine Learning and AI courses by Jigsaw Academy.
