This paper presents a theoretical study of the Asymptotic Convergence Rate (ACR) for finite-dimensional optimization. Given a problem function (fitness function), the ACR measures how fast an iterative optimization method converges to the global solution as the number of iterations tends to infinity. An ACR smaller than one indicates exponentially fast convergence (known as linear convergence in various contexts). The presented theory extends previous studies on the Average Convergence Rate, a related convergence rate measure. The main focus is on two questions: how a change of the problem function may influence the value of the ACR, and how the convergence rate in the objective space is related to the convergence rate in the search space. It is shown, in particular, that the ACR is the maximum of two components, one of which does not depend on the problem function. This provides a lower bound on the convergence rate and implies that some algorithms cannot converge exponentially fast on any nontrivial continuous optimization problem. Furthermore, among other results, it is shown how the convergence rate in the search space is related to the convergence rate in the objective space when the problem function is dominated by a polynomial. We discuss various examples and numerical simulations using the (1+1) self-adaptive evolution strategy and other algorithms.
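For orientation, the following display sketches one standard way such an asymptotic rate is formalized, namely as a root convergence factor of the error sequence; it is an illustrative sketch only, and the symbols $x_t$ (the iterate at step $t$), $x^*$ (the global solution), and the choice of the Euclidean norm are assumptions that may differ from the paper's precise definition.

% Illustrative sketch only: a common formalization of an asymptotic
% convergence rate as a root convergence factor. The iterate x_t, the
% global solution x^*, and the norm are assumptions for this example.
\[
  \mathrm{ACR} \;=\; \limsup_{t \to \infty} \, \lVert x_t - x^* \rVert^{1/t}.
\]
% If ACR = c < 1, then for every c' in (c, 1) one eventually has
% ||x_t - x^*|| <= (c')^t, i.e. the error decays exponentially fast
% (linear convergence in the classical terminology).

Under the same illustrative assumptions, the corresponding rate in the objective space would replace $\lVert x_t - x^* \rVert$ with the fitness gap $f(x_t) - f^*$; relating these two quantities is one of the questions studied in the paper.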