Consumption-Saving model#
The consumption-saving model, while simple in its basic form, allows us to analyze complex macroeconomic problems such as changes in fiscal policy, pension systems, migration, and demographic change. Some applications:
Sustaining Fiscal Policy through Immigration (Storesletten, 2000).
Quantifying the Effects of the Demographic Transition in Developing Economies (Attanasio, Kitao & Violante, 2006).
Global Demographic Trends and Social Security Reform (Attanasio, Kitao & Violante, 2006).
Taxing Capital? Not a Bad Idea After All! (Conesa, Kitao & Krueger, 2009).
In this section of the course, we are going to solve the basic \(T\) periods model using different techniques:
Unconstrained minimization
Constrained minimization
Euler equation system
Shooting
Value function iteration
Then, you will test which method works best. Later, we will code tax and pension applications.
In the consumption-saving model, there is a representative agent who lives for \(T\) periods. The agent chooses the optimal consumption path \(\{c_t\}_{t=0}^T\). Preferences are represented by the lifetime utility \(\sum_{t=0}^{T} \beta^{t} u(c_t)\), where \(0<\beta<1\) is the stationary discount factor. The utility function \(u(c)\) satisfies the following properties:
\(u(c)\) is strictly increasing
\(u(c)\) is strictly concave
\(\lim_{c\rightarrow 0} u^{'}(c)=\infty\)
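A standard example that satisfies all three properties (for \(\gamma > 0\), \(\gamma \neq 1\)) is the CRRA utility used in the code cells below:

```latex
u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad
u'(c) = c^{-\gamma} > 0, \qquad
u''(c) = -\gamma\, c^{-\gamma-1} < 0, \qquad
\lim_{c \to 0} u'(c) = \infty
```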
The initial amount of assets is given by \(a_0\) and the terminal condition is \(a_{T+1}\ge 0\). The net interest rate earned on savings is \(r_t\), and \(y_t\) denotes income in period \(t\). The budget constraint is \(a_{t+1} = (1+r_t)a_t + y_t - c_t\).
The dynamic optimization problem of the agent can be written as \(\max_{\{c_t\}_{t=0}^{T}} \sum_{t=0}^{T}\beta^{t} u(c_t)\) subject to \(a_{t+1} = (1+r_t)a_t + y_t - c_t\), with \(a_0\) given and \(a_{T+1}\ge 0\).
Solve a T=3 model#
Unconstrained minimization#
Solving for consumption from the budget constraint, \(c_t = R a_t + y_t - a_{t+1}\) with \(R = 1 + r\).
Then, the unconstrained problem is to choose the asset holdings \((a_2, a_3)\) directly:
# import Pkg; Pkg.add("Optim")
using Optim, Parameters
function params(; β = 0.96,
R = 1.04,
y1 = 100,
y2 = 100,
y3 = 100,
ā = 150)
return (β=β, R=R, ā=ā, y1=y1, y2=y2, y3=y3)
end
# Utility function, assuming CRRA utility
function u(c; γ=2.0)
return c^(1 - γ) / (1 - γ)
end
# Objective function
function objective(p, x)
@unpack β, R, ā, y1, y2, y3 = p
a2, a3 = x[1], x[2]
c1 = R * ā + y1 - a2
c2 = R * a2 + y2 - a3
c3 = R * a3 + y3
return -(u(c1) + β * u(c2) + β^2 * u(c3)) # Negative because Optim minimizes by default
end
objective (generic function with 1 method)
# Initial guess for the optimization
initial_guess = [150., 150.]
p = params()
# Optimize
result = optimize(x -> objective(p,x), initial_guess, BFGS())
# Results (note: the parameters must be accessed through p)
optimal_a2, optimal_a3 = Optim.minimizer(result)
optimal_c1 = p.R * p.ā + p.y1 - optimal_a2
optimal_c2 = p.R * optimal_a2 + p.y2 - optimal_a3
optimal_c3 = p.R * optimal_a3 + p.y3
println("Optimal a2: ", optimal_a2)
println("Optimal a3: ", optimal_a3)
println("Optimal c1: ", optimal_c1)
println("Optimal c2: ", optimal_c2)
println("Optimal c3: ", optimal_c3)
Constrained minimization#
using JuMP, Ipopt
# Parameters
β = 0.96 # Discount factor
R = 1.04 # Gross return on assets
y = [100, 100, 100] # Income in each period
ā = 150 # Initial assets
function u(c; γ=2.0)
return c^(1 - γ) / (1 - γ)
end
# Optimization model
model = Model(Ipopt.Optimizer)
@variable(model, c[1:3] >= 0) # Non-negative consumption in each period
@variable(model, a[1:4]) # Assets in each period
# Objective function
@NLobjective(model, Max, u(c[1]) + β*u(c[2]) + β^2*u(c[3]))
# Constraints
@constraint(model, a[1] == ā)
@constraint(model, a[2] == R * a[1] + y[1] - c[1])
@constraint(model, a[3] == R * a[2] + y[2] - c[2])
@constraint(model, a[4] == R * a[3] + y[3] - c[3])
@constraint(model, a[4] == 0) # Terminal asset value
# Solve the model
optimize!(model)
# Print the solution
println("Optimal solution:")
for t in 1:3
println("c[$t] = ", value(c[t]))
end
println("Objective value: ", objective_value(model))
┌ Warning: Function u automatically registered with 1 arguments.
│
│ Calling the function with a different number of arguments will result in an
│ error.
│
│ While you can safely ignore this warning, we recommend that you manually
│ register the function as follows:
│ ```Julia
│ model = Model()
│ register(model, :u, 1, u; autodiff = true)
│ ```
└ @ MathOptInterface.Nonlinear C:\Users\felix\.julia\packages\MathOptInterface\2rAFb\src\Nonlinear\operators.jl:430
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit https://github.com/coin-or/Ipopt
******************************************************************************
This is Ipopt version 3.14.14, running with linear solver MUMPS 5.6.2.
Number of nonzeros in equality constraint Jacobian...: 11
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 7
variables with only lower bounds: 3
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 5
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 -2.8816029e+02 1.50e+02 2.04e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 -1.8723695e-02 2.84e-14 1.60e+04 -1.0 1.60e+02 - 6.19e-05 1.00e+00f 1
2 -1.8722508e-02 1.42e-14 2.56e-04 -1.0 2.42e-01 - 9.97e-01 1.00e+00f 1
3 -1.8720830e-02 1.42e-14 2.83e-08 -2.5 2.92e-01 - 1.00e+00 1.00e+00f 1
4 -1.8717478e-02 1.42e-14 1.50e-09 -3.8 6.41e-01 - 1.00e+00 1.00e+00h 1
5 -1.8716806e-02 2.84e-14 1.84e-11 -5.7 1.39e-01 - 1.00e+00 1.00e+00h 1
6 -1.8716732e-02 2.84e-14 2.51e-14 -8.6 1.55e-02 - 1.00e+00 1.00e+00h 1
7 -1.8715671e-02 0.00e+00 2.51e-14 -12.9 2.29e-01 - 9.99e-01 1.00e+00h 1
8 -1.8705340e-02 2.84e-14 1.04e-15 -12.9 4.41e+00 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 8
(scaled) (unscaled)
Objective...............: 1.8705339649251498e-10 -1.8705339649251498e-02
Dual infeasibility......: 1.0430737292938402e-15 1.0430737292938403e-07
Constraint violation....: 2.8421709430404007e-14 2.8421709430404007e-14
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 3.2052476893332745e-13 3.2052476893332743e-05
Overall NLP error.......: 3.2052476893332745e-13 3.2052476893332743e-05
Number of objective function evaluations = 9
Number of objective gradient evaluations = 9
Number of equality constraint evaluations = 9
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 1
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 8
Total seconds in IPOPT = 0.240
EXIT: Optimal Solution Found.
Optimal solution:
c[1] = 154.27976827243637
c[2] = 154.08584266003513
c[3] = 153.77132627009632
Objective value: -0.018705339649251498
# Parameters
β = 0.96 # Discount factor
R = 1.04 # Gross return on assets
y = [100, 100, 100] # Income in each period
ā = 150 # Initial assets
function u(c; γ=2.0)
return c^(1 - γ) / (1 - γ)
end
# Optimization model: substitute a[1] = ā and a[4] = 0 to drop two variables
model = Model(Ipopt.Optimizer)
@variable(model, c[1:3] >= 0) # Non-negative consumption in each period
@variable(model, a[2:3]) # Assets in periods 2 and 3
# Objective function
@NLobjective(model, Max, u(c[1]) + β*u(c[2]) + β^2*u(c[3]))
# Constraints
# @constraint(model, a[1] == ā)
@constraint(model, a[2] == R * ā + y[1] - c[1])
@constraint(model, a[3] == R * a[2] + y[2] - c[2])
@constraint(model, 0 == R * a[3] + y[3] - c[3])
# @constraint(model, a[4] == 0) # Terminal asset value
# Solve the model
optimize!(model)
# Print the solution
println("Optimal solution:")
for t in 1:3
println("c[$t] = ", value(c[t]))
end
This is Ipopt version 3.14.14, running with linear solver MUMPS 5.6.2.
Number of nonzeros in equality constraint Jacobian...: 7
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 5
variables with only lower bounds: 3
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 3
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 -2.8816029e+02 2.56e+02 2.04e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 -1.8723695e-02 2.84e-14 1.60e+04 -1.0 1.60e+02 - 6.19e-05 1.00e+00f 1
2 -1.8722508e-02 0.00e+00 2.56e-04 -1.0 2.42e-01 - 9.97e-01 1.00e+00f 1
3 -1.8720830e-02 0.00e+00 2.83e-08 -2.5 2.92e-01 - 1.00e+00 1.00e+00f 1
4 -1.8717478e-02 0.00e+00 1.50e-09 -3.8 6.41e-01 - 1.00e+00 1.00e+00f 1
5 -1.8716806e-02 2.84e-14 1.84e-11 -5.7 1.39e-01 - 1.00e+00 1.00e+00f 1
6 -1.8716732e-02 0.00e+00 2.51e-14 -8.6 1.55e-02 - 1.00e+00 1.00e+00h 1
7 -1.8715671e-02 1.42e-14 2.51e-14 -12.9 2.29e-01 - 9.99e-01 1.00e+00f 1
8 -1.8705340e-02 1.42e-14 1.04e-15 -12.9 4.41e+00 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 8
(scaled) (unscaled)
Objective...............: 1.8705339649251511e-10 -1.8705339649251512e-02
Dual infeasibility......: 1.0430737298888593e-15 1.0430737298888593e-07
Constraint violation....: 1.4210854715202004e-14 1.4210854715202004e-14
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 3.2052476899665740e-13 3.2052476899665738e-05
Overall NLP error.......: 3.2052476899665740e-13 3.2052476899665738e-05
Number of objective function evaluations = 9
Number of objective gradient evaluations = 9
Number of equality constraint evaluations = 9
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 1
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 8
Total seconds in IPOPT = 0.039
EXIT: Optimal Solution Found.
Optimal solution:
c[1] = 154.27976827249418
c[2] = 154.08584266007117
c[3] = 153.77132626999628
Equation system using FOC: solving for consumption#
Solving for consumption \(c_1\), \(c_2\), and \(c_3\).
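Written out for \(T=3\), the first-order conditions form the system below: two Euler equations plus the terminal condition \(a_4 = 0\) obtained by chaining the budget constraints. This is what the `euler` function encodes:

```latex
u'(c_1) = \beta R\, u'(c_2), \qquad
u'(c_2) = \beta R\, u'(c_3), \qquad
R\bigl(R(R\bar{a} + y_1 - c_1) + y_2 - c_2\bigr) + y_3 - c_3 = 0
```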
function params(; β = 0.96,
R = 1.04,
y1 = 100,
y2 = 100,
y3 = 100,
ā = 150)
return (β=β, R=R, ā=ā, y1=y1, y2=y2, y3=y3)
end
using NLsolve
# Utility function, assuming CRRA utility
function u(c; γ=2.0)
return c > 0 ? c^(1 - γ) / (1 - γ) : -Inf # Return -Inf for invalid consumption
end
# Derivative of the utility function
function u_prime(c; γ=2.0)
return c^(-γ)
end
# System of Euler equations
function euler(p, x)
@unpack β, R, ā, y1, y2, y3 = p
c1, c2, c3 = x[1], x[2], x[3]
eq1 = u_prime(c1) - β * R * u_prime(c2)
eq2 = u_prime(c2) - β * R * u_prime(c3)
a2 = R * ā + y1 - c1
a3 = R * a2 + y2 - c2
a4 = R * a3 + y3 - c3 # must be zero at the solution
eq3 = a4 # terminal condition: a4 = 0
return [eq1, eq2, eq3]
end
p = params()
initial_guess = [150.0, 150.0, 150.0]
result = nlsolve(x-> euler(p,x), initial_guess, autodiff = :forward) # use automatic differentiation
Results of Nonlinear Solver Algorithm
* Algorithm: Trust-region with dogleg and autoscaling
* Starting Point: [150.0, 150.0, 150.0]
* Zero: [154.16289267711318, 154.04937167178804, 153.93566874177486]
* Inf-norm of residuals: 0.000000
* Iterations: 1
* Convergence: true
* |x - x'| < 0.0e+00: false
* |f(x)| < 1.0e-08: true
* Function Calls (f): 2
* Jacobian Calls (df/dx): 2
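With CRRA utility the Euler equation has a closed form, which serves as a sanity check on the solution above: consumption should decline at an (approximately) constant rate, since \(\beta R = 0.9984 < 1\):

```latex
u'(c_t) = \beta R\, u'(c_{t+1})
\;\Rightarrow\;
c_{t+1} = (\beta R)^{1/\gamma} c_t
= (0.96 \times 1.04)^{1/2}\, c_t \approx 0.9992\, c_t
```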
Equation system using FOC: solving for assets#
Another approach is to use the Euler equations and solve for \(a_2\) and \(a_3\).
The Euler equation is \(u^{'}(c_t) = \beta R\, u^{'}(c_{t+1})\).
Substituting \(c_t = Ra_t + y_t - a_{t+1}\) and \(c_{t+1} = Ra_{t+1} + y_{t+1} - a_{t+2}\) into the Euler equation: \(u^{'}(Ra_t + y_t - a_{t+1}) = \beta R\, u^{'}(Ra_{t+1} + y_{t+1} - a_{t+2})\)
Then, the problem is a second-order difference equation \(\phi(a_t, a_{t+1}, a_{t+2})=0\). In a \(T=3\) problem:
function params(; β = 0.96,
R = 1.04,
y1 = 100,
y2 = 100,
y3 = 100,
ā = 150)
return (β=β, R=R, ā=ā, y1=y1, y2=y2, y3=y3)
end
params (generic function with 1 method)
using NLsolve, Parameters
# Utility function, assuming CRRA utility
function u(c; γ=2.0)
return c > 0 ? c^(1 - γ) / (1 - γ) : -Inf # Return -Inf for invalid consumption
end
# Derivative of the utility function
function u_prime(c; γ=2.0)
return c^(-γ)
end
# System of Euler equations in the asset choices
function euler(p, x)
@unpack β, R, ā, y1, y2, y3 = p
a2, a3 = x[1], x[2]
eq1 = u_prime(R*ā + y1 - a2) - β * R * u_prime(R*a2 + y2 - a3) # phi(a1, a2, a3)
eq2 = u_prime(R*a2 + y2 - a3) - β * R * u_prime(R*a3 + y3) # phi(a2, a3, a4)
return [eq1, eq2]
end
p = params()
initial_guess = [150.0, 150.0]
result = nlsolve(x-> euler(p,x), initial_guess, autodiff = :forward) # use automatic differentiation
a2, a3 = result.zero
c1 = p.R*p.ā + p.y1 - a2
c2 = p.R*a2 + p.y2 - a3
c3 = p.R*a3 + p.y3
print("c1, c2, c3: ", (c1,c2,c3))
c1, c2, c3: (154.1784105404475, 154.0547359427045, 153.91330577903932)
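The shooting method listed at the start of the section can also solve this \(T=3\) problem: guess \(a_2\), roll the path forward with the CRRA Euler step \(c_{t+1} = (\beta R)^{1/\gamma} c_t\), and adjust the guess until the terminal asset is zero. A minimal sketch, using the same parameter values as above and plain bisection (no packages needed):

```julia
# Shooting for the T = 3 problem: guess a2, roll the path forward with the
# Euler step, and bisect on the guess until the terminal asset is zero.
# Same parameter values as in the rest of this section.
β, R, γ, ā = 0.96, 1.04, 2.0, 150.0
y1, y2, y3 = 100.0, 100.0, 100.0
g = (β * R)^(1 / γ)            # CRRA Euler step: c_{t+1} = g * c_t

# Forward pass: a guess for a2 pins down the whole path through the budget
# identities and the Euler steps; return the implied terminal asset a4.
function terminal_asset(a2)
    c1 = R * ā + y1 - a2
    c2 = g * c1                # Euler equation between periods 1 and 2
    a3 = R * a2 + y2 - c2
    c3 = g * c2                # Euler equation between periods 2 and 3
    return R * a3 + y3 - c3    # a4: zero at the solution
end

# terminal_asset is increasing in a2 (saving more leaves more at the end),
# so bisection on a bracketing interval finds the root.
function solve_shooting(; lo = 50.0, hi = 150.0, iters = 60)
    for _ in 1:iters
        mid = (lo + hi) / 2
        terminal_asset(mid) < 0 ? (lo = mid) : (hi = mid)
    end
    return (lo + hi) / 2
end

a2 = solve_shooting()
c1 = R * ā + y1 - a2
println("a2 ≈ ", a2, ", c1 ≈ ", c1)   # c1 ≈ 154.17, close to the Euler-system solution
```

The bracket `[50, 150]` is a hypothetical choice that happens to contain the root here; in general you would widen it until `terminal_asset` changes sign at the endpoints.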