diff --git a/d_evans_parallel_computing.ipynb b/d_evans_parallel_computing.ipynb new file mode 100644 index 0000000..9012e7f --- /dev/null +++ b/d_evans_parallel_computing.ipynb @@ -0,0 +1,1481 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Parallel Computing for Economics with Python" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Types of Parallel Computing" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When boiled down to its basics, there are two types of parallel computing environments: Shared Memory Multiple Processors (SMMP) and Distributed Memory Multiple Processors (DMMP). SMMP is the parallel environment most people are familiar with. Nearly all computers today have two or more processors sharing the same memory. Languages like C, C++ and Fortran can use OpenMP to quickly parallelize for loops in an SMMP environment. In addition, many numpy operations such as `.dot` are compiled to take advantage of parallel code in the BLAS matrix libraries. Shared memory environments are generally easier to write parallel code for: in many cases it can be as simple as adding a `#pragma omp parallel for` directive above a for loop. There is a limit, however, to the number of processors that can share the same memory, and eventually we have to move on to a distributed memory environment." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In a DMMP environment each computational node has its own memory and the nodes are linked together through a network. As the memory for the nodes is separate, the nodes need to communicate periodically to work on a joint project. There are multiple ways to handle this communication; in this lecture we will focus on two: MPI, which can be accessed through the python library `mpi4py`, and IPython parallel.
At the heart of the python language is the GIL, or Global Interpreter Lock, which allows only one thread at a time to access the memory of the python program. This effectively requires python code to be parallelized using DMMP techniques. This may seem like a disadvantage, but in reality most economic applications are so easily parallelized that the overhead of using MPI is negligible." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## mpi4py" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We will begin with an overview of how to use mpi4py to produce parallel code. MPI itself has a rich set of commands to govern the communication between processes. We will be focusing on a simple subset of these, but they should be sufficient for almost all economic applications. The best way to think about parallel code using mpi4py in this setup is as $N$ processes running the same python script independently of each other, occasionally communicating with each other via commands from mpi4py." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When they communicate it is necessary for the processes to distinguish one from another. Thus, each is assigned its own rank. If an MPI command launches 8 processes then the ranks of these processes will run from 0 to 7. This will allow us to have the script execute different commands for each process.
To illustrate this let's consider the following simple program:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Hello world from process 0 of 1\n" + ] + } + ], + "source": [ + "from mpi4py import MPI\n", + "\n", + "comm = MPI.COMM_WORLD #retrieve the world communicator\n", + "rank = comm.Get_rank() #get the rank of the process\n", + "size = comm.Get_size() #get the number of processes\n", + "\n", + "print(\"Hello world from process \"+str(rank)+\" of \" + str(size))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In this case the code was launched from a single application so the number of processes was one. To launch more than one process running the same python script we need to launch them using the command `mpiexec`. For example, if the code above were stored in `mpi_helloworld.py`, to launch 8 processes we would run the following command in the terminal. (All code files used in this tutorial can be produced by jumping to the [.py files appendix](#Appendix:-.py-files-used) and running all its cells.)" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Hello world from process 1 of 8\r\n", + "Hello world from process 6 of 8\r\n", + "Hello world from process 2 of 8\r\n", + "Hello world from process 5 of 8\r\n", + "Hello world from process 7 of 8\r\n", + "Hello world from process 4 of 8\r\n", + "Hello world from process 3 of 8\r\n", + "Hello world from process 0 of 8\r\n" + ] + } + ], + "source": [ + "!mpiexec -n 8 python ./mpi/mpi_helloworld.py " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that the `!` is only necessary to launch the shell command from IPython."
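Since every process runs the same script, the rank alone determines what each process does. A toy serial sketch of this idea (a hypothetical helper, not one of the lecture's .py files) that can be run without `mpiexec`:

```python
# Hypothetical helper (not part of the lecture's code): every process runs
# the same script, and the rank alone determines its share of the work.
def work_for_rank(rank, size, n_tasks):
    """Task indices the process with this rank would handle (round-robin)."""
    return [i for i in range(n_tasks) if i % size == rank]

# simulate an `mpiexec -n 4` launch serially, without MPI installed
for rank in range(4):
    print("process", rank, "handles", work_for_rank(rank, 4, 10))
```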
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## The `mpi4py` Toolkit" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Handling communication between processes is the most important part of parallel programming. There are a multitude of commands in mpi4py to do this, but we will focus on three: `gather`, `allgather` and `bcast`. These will allow us to confront almost all of the problems we face in economics." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### `gather` and `allgather`" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We use `gather` if there is data on all processes which we want to pull together onto a single process. For example, if you have a program that solves an optimization problem using value function iteration, each process can solve the optimization problem on a separate chunk of the state space. It would then be possible to gather the results for interpolation before continuing on to the next stage of the iteration. For the process of rank 0, `gather` returns a list with the data from each process ordered by the rank of the process. For the rest of the processes `gather` returns `None`." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Another natural application of gather is Monte Carlo integration. In this simple example we compute the expected value of the random variable `cos(x)` when x is distributed according to a standard normal distribution. The code is in `MPI_montecarlo.py`."
+ ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0.598340452339\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from mpi4py import MPI\n", + "\n", + "N = 10000\n", + "\n", + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "\n", + "x = np.random.randn(N)\n", + "my_result = np.cos(x).mean() #my average\n", + "\n", + "#now average across all processes\n", + "results = comm.gather(my_result)\n", + "if rank == 0: #only the rank 0 process has the results\n", + "    print(np.mean(results))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The result above was computed with a single process. We can run it with multiple processes via `mpiexec`." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0.604594549311\r\n" + ] + } + ], + "source": [ + "!mpiexec -n 8 python ./mpi/MPI_montecarlo.py" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`allgather` performs a very similar function to `gather`. The difference is in which processes receive the data. In the above example the process with rank 0 has an object `results` which contains a list of the result from each process. For every other process `results` is `None`. If `allgather` were used in place of `gather` the `results` object would be the same in all processes.
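One detail worth flagging in the code above: `np.mean(results)`, the mean of per-process means, matches the pooled sample mean only because every process draws the same number of points `N`. A quick serial check with plain numpy (no MPI launch needed):

```python
import numpy as np

# Serial simulation of the gather step: each "process" draws the same number
# of points, so the mean of the per-process means equals the pooled mean.
rng = np.random.default_rng(0)
N, n_proc = 10000, 4

draws = [rng.standard_normal(N) for _ in range(n_proc)]
per_process = [np.cos(x).mean() for x in draws]   # what gather collects
combined = np.mean(per_process)                   # what the rank 0 process computes
pooled = np.cos(np.concatenate(draws)).mean()     # one big sample instead

assert np.isclose(combined, pooled)   # equal-sized chunks: the two agree
print(combined)                       # close to E[cos(x)] = exp(-1/2)
```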
Consider the following variant of our Monte Carlo example" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gather result for process 0: [0.61107664551445218]\n", + "Allgather results for process 0: [0.61107664551445218]\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from mpi4py import MPI\n", + "\n", + "N = 10000\n", + "\n", + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "\n", + "x = np.random.randn(N)\n", + "my_result = np.cos(x).mean() #my average\n", + "\n", + "#now gather from all processes\n", + "results = comm.gather(my_result)\n", + "\n", + "print(\"Gather result for process \" + str(rank) +\":\",results)\n", + "\n", + "#now allgather from all processes\n", + "results = comm.allgather(my_result)\n", + "\n", + "print(\"Allgather results for process \" + str(rank) + \":\",results)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If we run this for multiple processes we get the following results:" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Gather result for process 0: [0.61226338708282091, 0.60707124272976165, 0.6074580403169666, 0.60017384225078951]\r\n", + "Allgather resuts for process 0: [0.61226338708282091, 0.60707124272976165, 0.6074580403169666, 0.60017384225078951]\r\n", + "Gather result for process 1: None\r\n", + "Allgather resuts for process 1: [0.61226338708282091, 0.60707124272976165, 0.6074580403169666, 0.60017384225078951]\r\n", + "Gather result for process 2: None\r\n", + "Allgather resuts for process 2: [0.61226338708282091, 0.60707124272976165, 0.6074580403169666, 0.60017384225078951]\r\n", + "Gather result for process 3: None\r\n", + "Allgather resuts for process 3: [0.61226338708282091, 0.60707124272976165, 
0.6074580403169666, 0.60017384225078951]\r\n" + ] + } + ], + "source": [ + "!mpiexec -n 4 python ./mpi/MPI_montecarlo_allgather.py" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As expected, with `gather` only the process with rank 0 has the list containing the Monte Carlo average for each process. With `allgather` each process has the same list. As all of the processes are running simultaneously the printed results may be jumbled, but the code will always print the `gather` results before the `allgather` results. The reason for this is that both `allgather` and `gather` are blocking. Once a process has reached the `gather` command it will not continue until all other processes have reached that command as well. This ensures that your code will remain synced throughout execution.\n", + "\n", + "There is one caveat. On some systems I have found `allgather` may hang if there are too many processes. If this occurs it may be necessary to replace an `allgather` command with a combination of `gather` and the `bcast` command discussed below, or with the numpy versions of these commands." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## bcast" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "One can think of `bcast` as the reverse of `gather`. Instead of gathering data at the root process (the process with rank 0), `bcast` takes data on the root process and sends it to all the other processes.
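The `gather`-plus-`bcast` replacement suggested above works because, rank by rank, `bcast(gather(x))` reproduces exactly what `allgather(x)` returns. A toy single-process model of these semantics (plain hypothetical functions, not mpi4py calls; element `i` of each list plays the role of the value held by the process of rank `i`):

```python
# Toy single-process model of the collective semantics (hypothetical
# functions, not mpi4py calls): list slot i stands in for rank i.
def gather(values, root=0):
    """Like comm.gather: the full ordered list at root, None elsewhere."""
    return [list(values) if rank == root else None for rank in range(len(values))]

def bcast(per_rank, root=0):
    """Like comm.bcast: every rank ends up with the root's object."""
    return [per_rank[root] for _ in per_rank]

def allgather(values):
    """Like comm.allgather: every rank gets the full ordered list."""
    return [list(values) for _ in values]

results = [0.61, 0.60, 0.59, 0.62]            # one Monte Carlo mean per rank
assert bcast(gather(results)) == allgather(results)
```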
Take this simple code:" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "from mpi4py import MPI\n", + "\n", + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "\n", + "if rank == 0:\n", + "    A = np.random.rand(5,5)\n", + "else:\n", + "    A = None\n", + "    \n", + "B = comm.bcast(A)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This code creates a `5x5` matrix on the process with rank 0. It then broadcasts that matrix to all the other processes and stores it in the variable `B`. Note that `A` still holds `None` on the processes with rank other than 0." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### What can be sent?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A natural question at this point is what kind of python objects can be sent between processes with `mpi4py`. It turns out that nearly all python objects can be sent. The way an object is sent via mpi4py is that it is first pickled, which converts the python object to a string. The string is then sent to the other processes, which convert it back into its original form. This makes `mpi4py` very flexible. Nearly all objects in python are picklable, including most user defined classes, which means that most objects can be communicated from one process to another. In some cases, such as classes defined with Cython, it may be necessary to tell python how to pickle and unpickle an object, but such cases are generally outside normal usage.\n", + "\n", + "This flexibility does have a drawback. Python must convert the object to a string and then back again, which can make sending an object slow. If your program relies on frequent communication between processes it may be worth using the versions of `gather` and `allgather` designed for numpy arrays.
When these are used Python no longer pickles the object and instead shares the numpy array data directly. We discuss these in the next section." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The Numpy Versions" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "All of the `mpi4py` commands we have discussed have corresponding numpy versions. You can recognize these commands because they begin with a capital letter. For example, the numpy version of `allgather` is `Allgather`. The usage of these commands is also slightly different. To see how this works let's consider the following example:" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[ 0. 0.]\n", + " [ 0. 0.]]\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "from mpi4py import MPI\n", + "\n", + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "size = comm.Get_size()\n", + "\n", + "# create a 2x2 array filled with the rank of the process\n", + "x = np.ones((2,2)) * rank \n", + "\n", + "# create an empty array to store the gathered result\n", + "X = np.empty((2*size,2))\n", + "\n", + "#use numpy Allgather\n", + " #send buffer #recv buffer\n", + "comm.Allgather([x, MPI.DOUBLE],[X, MPI.DOUBLE])\n", + "\n", + "if rank ==0:\n", + " print(X)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If we run this command with 4 processes it returns:" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[ 0. 0.]\r\n", + " [ 0. 0.]\r\n", + " [ 1. 1.]\r\n", + " [ 1. 1.]\r\n", + " [ 2. 2.]\r\n", + " [ 2. 2.]\r\n", + " [ 3. 3.]\r\n", + " [ 3. 
3.]]\r\n" + ] + } + ], + "source": [ + "!mpiexec -n 4 python ./mpi/MPI_Allgather.py" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see some immediate differences between the usage of `allgather` and `Allgather`. The first is that `Allgather` gives us a numpy array rather than a list. A good rule of thumb is that `Allgather` will fill the receive buffer with the equivalent of `np.vstack(comm.allgather(x))`. We also must preallocate space for that array. `Allgather` takes two inputs. `[x, MPI.DOUBLE]` is the send buffer, where we specify the numpy array we wish to send and the type of data stored in it. `[X, MPI.DOUBLE]` is the receive buffer: the numpy array where we wish to store the result and the type of data it holds. `Allgather` and `Gather` work best when the data spread across the different processes have the same shape; otherwise it is also necessary to specify the amount of data sent by each process." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## An Economic Example" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now use `mpi4py` to solve a simple economics problem in parallel. For this example we will study a simple consumption savings model with incomplete markets. Consider an agent who faces a random endowment stream $y_t = \\exp(z_t)$ which is AR(1) in logs\n", + "$$\n", + "    z_t = \\rho z_{t-1} + \\epsilon_t\n", + "$$ The agent cannot perfectly insure against this random endowment and can only save through a risk free bond with interest rate $r$. The budget constraint of the agent is then\n", + "$$\n", + "    (1+r)a_{t-1} + y_t = c_t + a_t. \n", + "$$ Additionally, agents face an ad-hoc borrowing constraint $a_t \\geq \\underline a$ for all $t\\geq0$.\n", + "\n", + "The problem can be written recursively as follows. Let $V(a,z)$ be the value of an agent who enters the period with assets $a$ and endowment $\\exp(z)$.
$V(a,z)$ then solves the following Bellman equation\n", + "$$\n", + "    V(a,z) = \\max_{a' \\geq \\underline a} U( (1+r)a + \\exp(z) - a' ) + \\beta \\mathbb E_z V(a',z')\n", + "$$ Define the mapping $T$ associated with the Bellman equation as \n", + "$$\n", + "    T(V)(a,z) = \\max_{a' \\geq \\underline a} U( (1+r)a + \\exp(z) - a' ) + \\beta \\mathbb E_z V(a',z').\n", + "$$ The solution to the Bellman equation is the fixed point of this mapping.\n", + "\n", + "The python modules `bellman.py`, `primitives.py` and `Main.py` are written to solve this problem; see the [appendix](#Appendix:-.py-files-used) for the code. `primitives.py` defines the parameters that are the primitives of the problem and may easily be adapted for different types of preferences. For example, we can create a Para object holding the parameters of our problem via:" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from mpi.primitives import primitives_CRRA\n", + "Para = primitives_CRRA()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`bellman.py` defines the Bellman Map class. This can be constructed from a parameter class and an interest rate $r$:" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import mpi.bellman as bellman\n", + "r = 0.9995*(1./Para.beta - 1)\n", + "T = bellman.BellmanMap(Para,r)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`T` here corresponds exactly to the $T$ defined in the equations above. For a given continuation value function $V$, `T(V)` is a function that computes the maximized right-hand side of the Bellman equation.
For example, if we define $V_0$ below:" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "-19.0776632874 -7.95513729592\n" + ] + } + ], + "source": [ + "from numpy import *\n", + "V0 = lambda a,z : Para.U(exp(z)+r*a)/(1-Para.beta)\n", + "print(T(V0)([1.,0.]), T(V0)([1.,1.]))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It will print out the value after one iteration for an agent with $(a,z) = (1,0)$ and $(1,1)$, respectively. All of this code is written just as it would be for serial execution, and we won't go into the details of the code here. Instead, we will focus on how to parallelize the solution of the Bellman equation. The key step is approximating the current value function $T(V)$ via interpolation. We proceed by creating a grid over the $(a,z)$ space (with more points placed where the value function has greater curvature)."
+ ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "data": { + "text/plain": [ + "[]" + ] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + }, + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAX8AAAEACAYAAABbMHZzAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAGSBJREFUeJzt3X+M5Hd93/Hna5l61cEKSmg4qA2GCyEkVjJjs10uca83\nTVo4X+527ygidkUJSAWEuhskospOQOzyRyXoH61M3Iq4dfiluLtkV8FnAqovsqeISoGLbw/bsMeY\nBpyza18bpU4LQSt77t0/5utlb2Z3b7/zne/Od+b7ekhfMd/vfO/1/Xx2zXtnPt8fH0UEZmZWLhPD\nboCZme0/F38zsxJy8TczKyEXfzOzEnLxNzMrIRd/M7MSylz8JV0v6SFJ35L0mKTf2mG/T0p6QtJ5\nSfWsxzUzs/5VBpDxAvChiDgv6VrgEUkPRsSFF3eQdCvwMxHxs5LeDHwKODSAY5uZWR8yf/KPiGcj\n4nzy+gfAOnBd126zwOeSfb4OvEzSgazHNjOz/gx0zF/Sa4E68PWut64DLm5Zf5rePxBmZrZPBlb8\nkyGfFeCDyTcAMzMrqEGM+SOpQqfwfz4i7t9ml6eBV29Zvz7Ztl2WHzZkZpZSRCjN/oP65P8HwLcj\n4q4d3j8NvAtA0iHguYi4tHPcMeAU0AYiWdpMTp6kVpvr2V6vz9Nut4mIvpd2u5OTb/ZHR7Tde8te\nWFgYyXbvdXmxf6PY9r1kp/n9FandaX9/o9buvWX3IUsDIgLglqQF54E14BxwFHg/8L4t+90NfBf4\nJnDzLnkB0wG/EjATsBSwFJXK8bjvvvvj3LnHo16fj2p1JarVlajV5uLcucdjEPLOPnBgeiTbvdfs\nhYWF3LLTyiN7a/9Gre17yU77+0uTndUgsnfqX9HbvZfsTilPV7sVUaxRFkkxMzOzuf6Sl7yEmZkZ\n3vnOd1KpdEapLl++zNraGgA33XQTExODO2+dZ/bCwgIv9m2U2r3X7MXFRRYXF3PJ7segs7v7N0pt\n30t2P7+/vWYPQtbs3fpX5HbvJXtqaopIOexTyOJftDYNSrPZpNFoDLsZuXH/Rpv7N7okufibmZVN\nP8Xfz/YxMyshF38zsxJy8TczKyEXfzOzEnLxNzMrIRd/M7MScvE3MyshF38zsxJy8TczKyEXfzOz\nEnLxNzMrIRd/M7MScvE3MyshF38zsxIayBy+gzY7O7v5epwmc3H2+GTnne9sZ6fNTi3t1F95L/RM\n47gcsByVyomuaRxXo1pdjXp9Podp0Zzt7OHlO9vZabPpYxrHoRf7ngZBwLGAUwHtgEiWdkxOnoxa\nba5ne70+H+12O9MPst3u5Djb2ePadmePc3b64j+Q7x+S7pV0SdKjO7x/RNJzks4ly0d2TzwI3MaV\npyQm2NiYZn39cM/2VutI/199Emtra7RaDWc7e6j5znZ2/9npDGrM/9PA7wGf22Wfr0bEzC7vm5nZ\nfkn7VWGnBbgBeHSH944AD+wxx8M+zi5s9ii33dnjnJ1+2Gc/i/9fAeeBPwF+YZecuPKE71LAUlQq\nx7tO+K5EtboStdpcDidQnO3s4eU729lps/sp/opOwc1M0g3Jp/tf2ua9a4HLEfG3km4F7oqIN+yQ\nEzMzPx4d8qWezi5idt75znZ2muypqSkiQmn+7b4U/232/R7wpoj4623ei4WFhc31RqNBo9EYSBvN\nzM
ZBs9mk2Wxurn/sYx8bavF/LZ3i/4vbvHcgIi4lr6eBL0TEa3fIiUG1ycysDCSlLv4DudpH0n1A\nA3i5pL8EFoBr6IxD3QO8XdIHgOeBHwG/MYjjmplZfwb2yX9Q/MnfzCydfj75+8FuZmYl5OJvZlZC\nLv5mZiXk4m9mVkIu/mZmJeTib2ZWQi7+ZmYl5OJvZlZCLv5mZiXk4m9mVkIu/mZmJeTib2ZWQi7+\nZmYlNKgJ3AdqdnZ287Vn8nJ2EbPzzne2s9Nmp5Z23se8F3rm8F0OWI5K5UTXHL6rUa2uRr0+n8Oc\nmM529vDyne3stNkMcwL3QS2d4n8s4FTPrPeTkyejVpvr2V6vz0e73c70g2y3OznOdva4tt3Z45yd\nvvgXdMz/IHAbV56SmGBjY5r19cM921utI/1/9Umsra3RajWc7eyh5jvb2f1np1PQ4m9mZrlK+1Uh\n7wUP+zi7wNmj3HZnj3N2+mGfQRXse4FLwKO77PNJ4AngPFDfZb+48oTvUsBSVCrHu074rkS1uhK1\n2lwOJ1Cc7ezh5Tvb2Wmz+yn+A5nAXdI/BH4AfC4ifmmb928F5iLi1yW9GbgrIg7tkBUzMzOb677U\n09lFzM4739nOTpM9NTVFpJzAfSDFH0DSDcADOxT/TwEPR8Rysr4ONCLi0jb7xqDaZGZWBpJSF//9\nOuF7HXBxy/rTyTYzMxuCQt7hu7i4uPm60WjQaDSG1hYzs6JpNps0m81MGcMa9rkAHPGwj5lZdsMe\n9lGybOc08C4ASYeA57Yr/GZmtj8GMuwj6T6gAbxc0l8CC8A1dC4/uicivizpmKTvAj8E3jOI45qZ\nWX8GNuwzKB72MTNLZ9jDPmZmNiJc/M3MSsjF38yshFz8zcxKyMXfzKyEXPzNzErIxd/MrIRc/M3M\nSsjF38yshFz8zcxKqJCPdJ6dnd187Zm8nF3E7Lzzne3stNmppZ33Me+Fnjl8lwOWo1I50TWH72pU\nq6tRr8/nMCems509vHxnOzttNsOawH2QS6f4Hws41TPr/eTkyajV5nq21+vz0W63M/0g2+1OjrOd\nPa5td/Y4Z6cv/gUd8z8I3MaVpyQm2NiYZn39cM/2VutI/199Emtra7RaDWc7e6j5znZ2/9npFLT4\nm5lZrtJ+Vch7wcM+zi5w9ii33dnjnJ1+2Gfoxb6nQT0nfJcClqJSOd51wnclqtWVqNXmcjiB4mxn\nDy/f2c5Om91P8S/kTF4zMzOb677U09lFzM4739nOTpM9NTVFpJzJq5DFv2htMjMrsqFN4yjpqKQL\nklqS7tjm/SOSnpN0Llk+MojjmplZfzLf4StpArgb+DXgfwJnJd0fERe6dv1qRMz0BJiZ2b4bxCf/\naeCJiHgyIp4HloDZbfZL9ZXEzMzyM4jifx1wccv6U8m2br8s6bykP5H0CwM4rpmZ9Wm/Huz2CPCa\niPhbSbcCXwTesNPOi4uLm68bjQaNRiPv9pmZjYxms0mz2cyUkflqH0mHgMWIOJqs30nnmtNP7PJv\nvge8KSL+epv3fLWPmVkKw7ra5yzwekk3SLqGzkN5Tnc17MCW19N0/uj0FH4zM9sfmYd9IqItaQ54\nkM4fk3sjYl3S+ztvxz3A2yV9AHge+BHwG1mPa2Zm/fNNXmZmI25oN3mZmdlocfE3MyshF38zsxJy\n8TczKyEXfzOzEnLxNzMrIRd/M7MS2q9n+6QyO/vjh4J6Ji9nFzE773xnOzttdmpp533Me6FnDt/l\ngOWoVE50zeG7GtXqatTr8znMielsZw8v39nOTpvN+EzgfizgVM+s95OTJ6NWm+vZXq/PR7vdzvSD\nbLc7Oc529ri23dnjnJ2++Bd0zP8gnefDbW3eBBsb06yvH+7Z3mod6f+rT2JtbY1Wq+FsZw8139nO\n7j87nYIWfzMzy1Xarwp5L3jYx9kFzh7ltjt7nLPTD/sMvdj3NKjn
hO9SwFJUKse7TviuRLW6ErXa\nXA4nUJzt7OHlO9vZabP7Kf6FfKTzzMzM5rov9XR2EbPzzne2s9NkT01NESkf6VzI4l+0NpmZFZmf\n529mZnvi4m9mVkIu/mZmJTSQ4i/pqKQLklqS7thhn09KekLSeUn1QRzXzMz6k7n4S5oA7gbeCtwI\n3C7pjV373Ar8TET8LPB+4FNZj2tmZv0bxCf/aeCJiHgyIp4HloDZrn1mgc8BRMTXgZdJOjCAY5uZ\nWR8GUfyvAy5uWX8q2bbbPk9vs4+Zme2TQj7Pf3FxcfN1o9Gg0WgMrS1mZkXTbDZpNpuZMjLf5CXp\nELAYEUeT9Tvp3Gr8iS37fAp4OCKWk/ULwJGIuLRNnm/yMjNLYVg3eZ0FXi/pBknX0HkW8+mufU4D\n70oaeQh4brvCb2Zm+yPzsE9EtCXNAQ/S+WNyb0SsS3p/5+24JyK+LOmYpO8CPwTek/W4ZmbWPz/b\nx8xsxPnZPmZmticu/mZmJeTib2ZWQi7+ZmYlVMibvGZnf/x0CM/k5ewiZued72xnp81OLe28j3kv\n9MzhuxywHJXKia45fFejWl2Nen0+hzkxne3s4eU729lpsxmfCdyPBZzqmfV+cvJk1GpzPdvr9flo\nt9uZfpDtdifH2c4e17Y7e5yz0xf/go75H6Rzo/DW5k2wsTHN+vrhnu2t1pH+v/ok1tbWaLUaznb2\nUPOd7ez+s9MpaPE3M7Ncpf2qkPeCh32cXeDsUW67s8c5O/2wz9CLfU+Dek74LgUsRaVyvOuE70pU\nqytRq83lcALF2c4eXr6znZ02u5/iX8hn+8zMzGyu+1JPZxcxO+98Zzs7TfbU1BSR8tk+hSz+RWuT\nmVmR+cFuZma2Jy7+ZmYl5OJvZlZCLv5mZiXk4m9mVkIu/mZmJZTpkc6SfhJYBm4Avg+8IyL+Zpv9\nvg/8DXAZeD4iprMc18zMssn6yf9O4E8j4ueAh4Df2WG/y0AjIm5y4TczG76sxX8W+Gzy+rPAyR32\n0wCOZWZmA5K1IL8iIi4BRMSzwCt22C+AM5LOSnpvxmOamVlGVx3zl3QGOLB1E51i/pFtdt/puQy3\nRMQzkn6azh+B9Yj42k7HXFxc3HzdaDRoNBpXa6aZWWk0m02azWamjEzP9pG0Tmcs/5KkVwIPR8TP\nX+XfLAD/LyL+3Q7v+9k+ZmYpDOPZPqeBdyevfxO4f5tGVSVdm7x+KfAW4PGMxzUzswyyfvL/KeAL\nwKuBJ+lc6vmcpFcB/ykijkt6HfDHdIaEKsAfRsTHd8n0J38zsxT6+eTvRzqbmY24fop/ppu88jI7\nO7v52pO5OLuI2XnnO9vZabNTSzv1V94LPdM4LgcsR6Vyomsax9WoVlejXp/PYVo0Zzt7ePnOdnba\nbMZnDl9P4O7sYmaPctudPc7Z6Yt/Qe+6PQjcxpUXI02wsTHN+vrhnu2t1pH+v/ok1tbWaLUaznb2\nUPOd7ez+s9MpaPE3M7Ncpf2qkPeCh32cXeDsUW67s8c5O/2wz9CLfU+Dek74LgUsRaVyvOuE70pU\nqytRq83lcALF2c4eXr6znZ02u5/iX8jr/GdmZjbXfamns4uYnXe+s52dJntqaorwTV5mZuUyjGf7\nmJnZCHLxNzMrIRd/M7MScvE3MyshF38zsxJy8TczKyEXfzOzEnLxNzMrIRd/M7MScvE3MyuhTMVf\n0tslPS6pLenmXfY7KumCpJakO7Ic08zMssv6yf8x4BTw33baQdIEcDfwVuBG4HZJb8x4XDMzyyDT\nBO4R8R0ASbs9UGgaeCIinkz2XQJmgQtZjm1mZv3bjzH/64CLW9afSraZmdmQXPWTv6QzwIGtm4AA\nPhwRD+TRqMXFxc3XjUaDRqORx2HMzEZSs9mk2WxmyhjI8/wlPQz8dkSc2+a9Q8BiRBxN1u+kM+vM\nJ3bI8vP8zcxS6Od5/pnG/LuP
v8P2s8DrJd0APAPcBty+W9Ds7Ozma8/k5ewiZued72xnp81OLe28\nj3HlfLsn6Yzn/4hOYf9Ksv1VwJe27HcU+A7wBHDnVTK75vBdDliOSuVE1xy+q1Gtrka9Pp/DnJjO\ndvbw8p3t7LTZjM8E7scCTvXMej85eTJqtbme7fX6fLTb7Uw/yHa7k+NsZ49r2509ztnpi39B7/A9\nSGd0aGvzJtjYmGZ9/XDP9lbrSP9ffRJra2u0Wg1nO3uo+c52dv/Z6RS0+JuZWa7SflXIe8HDPs4u\ncPYot93Z45ydfthn6MW+p0E9J3yXApaiUjnedcJ3JarVlajV5nI4geJsZw8v39nOTpvdT/EfyHX+\ngyQpZmZmNtd9qaezi5idd76znZ0me2pqikh5nX8hi3/R2mRmVmT93OTlE75mZiXk4m9mVkIu/mZm\nJeTib2ZWQi7+ZmYl5OJvZlZCLv5mZiXk4m9mVkIu/mZmJeTib2ZWQi7+ZmYl5OJvZlZCmYq/pLdL\nelxSW9LNu+z3fUnflLQm6RtZjmlmZtlVMv77x4BTwO9fZb/LQCMi/k/G45mZ2QBkKv4R8R0ASVd7\nlKjwEJOZWWHsV0EO4Iyks5Leu0/HNDOzHVz1k7+kM8CBrZvoFPMPR8QDezzOLRHxjKSfpvNHYD0i\nvrbTzrOzs5uvPZOXs4uYnXe+s52dNju1tPM+brcADwM373HfBeBDu7wf8PcDrg94Q8BHA5ajUjnR\nNYfvalSrq1Gvz+cwJ6aznT28fGc7+2oefvjheN/7PhAHDkxHpfKO4U3gnhT/N+3wXhW4Nnn9UuC/\nA2/ZJSvgWMCpnlnvJydPRq0217O9Xp+Pdrud6YfZbndynO3scW27s8c5O33xz3qp50lJF4FDwJck\nfSXZ/ipJX0p2OwB8TdIa8GfAAxHx4O7JB4HbuPKUxAQbG9Osrx/u2d5qHen/q09ibW2NVqvhbGcP\nNd/Zzu4/O52sV/t8EfjiNtufAY4nr78H1LMcx8zMBiztV4W8Fzzs4+wCZ49y2509ztnph32GXux7\nGgQB0wG/EjATsBSwFJXK8a4TvitRra5ErTaXw8kZZzt7ePnOdnba7H6Kv6JTcAtDUszMzGyu+1JP\nZxcxO+98Zzs7TfbU1BQRcbWbba9QyOJftDaZmRWZpNTF349cMDMrIRd/M7MScvE3MyshF38zsxJy\n8TczKyEXfzOzEnLxNzMrIRd/M7MScvE3MyshF38zsxJy8TczKyEXfzOzEnLxNzMrIRd/M7MScvE3\nMyuhrBO4/1tJ65LOS1qV9BM77HdU0gVJLUl3ZDmmmZlll/WT/4PAjRFRB54Afqd7B0kTwN3AW4Eb\ngdslvXG30NnZ2c3lbW97G5/5zGd44YUXNt+/fPkyjzzyCI888giXL1/O2IUr5Zn90EMPjWS795rd\nbDZzy+7HoLO7+zdKbd9Ldj+/v71mD0LW7N36V+R27yW7L2nnfdxpAU4Cn99m+yHgK1vW7wTu2CWn\naw7f5YDlqFROdM3huxrV6mrU6/M5zImZT/aBA9Mj2e69Zi8sLOSWnVYe2Vv7N2pt30t22t9fmuys\nBpG9U/+K3u69ZDPMCdyB08A/32b7PwPu2bL+TuCTu+QEHAs41TPr/eTkyajV5nq21+vz0W63M/0g\n2+1OTr7ZHx3Rdu8tO03xKFK79+rF/o1i2/eSnaX4j8LPZLv+jUK795advvhfddhH0hlJj25ZHkv+\n98SWfT4MPB8R9/X3/aPbQeA2rhyVmmBjY5r19cM921utI5sTJPdrbW2NVquRc/bWKTZHqd3O3q98\nZzu7/+x0Mk/gLundwHuBX42IjW3ePwQsRsTRZP1OOn+lPrFDnmdvNzNLKVJO4F7JcjBJR4F/Dfyj\n7Qp/4izwekk3AM/Q+Uh/+06ZaTtgZmbpZb3a5/eAa4Ezks5J+o8Akl4l6UsAEdEG5uhcGfQtYC
ki\n1jMe18zMMsg87GNmZqOnMHf4jtuNYJLulXRJ0qNbtv2kpAclfUfSf5X0smG2MQtJ10t6SNK3kosA\nfivZPvJ9lDQp6euS1pK+LSTbR75vW0maSL6xn07Wx6Z/kr4v6ZvJ7/AbybZx6t/LJP1RcpPttyS9\nOW3/ClH8+7kRbAR8mk5/troT+NOI+DngIba5KW6EvAB8KCJuBH4Z+FfJ72zk+5icv/rHEXETUAdu\nlTTNGPStyweBb29ZH6f+XQYaEXFTREwn28apf3cBX46InwdqwAXS9i/ttaF5LKS8EWxUFuAG4NEt\n6xeAA8nrVwIXht3GAfb1i8A/Gbc+AlXgz4F/ME59A64HzgAN4HSybZz69z3g5V3bxqJ/wE8A/2Ob\n7an6V4hP/sB1wMUt608l28bNKyLiEkBEPAu8YsjtGQhJr6XzCfnP6PzHN/J9TIZE1oBngTMRcZYx\n6Vvi39O5Um/rSb9x6l/QuRDlrKR/mWwbl/69DvgrSZ9Ohu3ukVQlZf+KUvzLauTPtku6FlgBPhgR\nP6C3TyPZx4i4HJ1hn+uBaUk3MiZ9k/TrwKWIOM+Vdx52G8n+JW6JiJuBY3SGJA8zJr8/Opfo3wz8\nh6SPP6QzWpKqf0Up/k8Dr9myfn2ybdxcknQAQNIrgf815PZkIqlCp/B/PiLuTzaPVR8j4v8CTeAo\n49O3W4AZSX8B/BfgVyV9Hnh2TPpHRDyT/O//pjMkOc34/P6eAi5GxJ8n66t0/hik6l9Riv/mjWCS\nrqFzI9jpIbdpEMSVn6xOA+9OXv8mcH/3PxgxfwB8OyLu2rJt5Pso6e+9eKWEpL8L/FNgnTHoG0BE\n/G5EvCYiXnyOykMR8S+ABxiD/kmqJt9IkfRS4C3AY4zP7+8ScFHSG5JNv0bnHqpU/SvMdf7J3cJ3\n0fmDdG9EfHzITcpE0n10Tqa9HLgELND5BPJHwKuBJ4F3RMRzw2pjFpJuAb5K5/9UkSy/C3wD+AIj\n3EdJvwh8ls5/ixPAckT8G0k/xYj3rZukI8BvR8TMuPRP0uuAP6bz32QF+MOI+Pi49A9AUg34z8Df\nAf4CeA/wElL0rzDF38zM9k9Rhn3MzGwfufibmZWQi7+ZWQm5+JuZlZCLv5lZCbn4m5mVkIu/mVkJ\nufibmZXQ/wcNMN85HO9v7wAAAABJRU5ErkJggg==\n", + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "agrid = hstack((linspace(Para.a_min,Para.a_min+1.,15),\n", + " linspace(Para.a_min+2.,Para.a_max,15)))\n", + "zgrid = linspace(Para.z_min,Para.z_max,10)\n", + "\n", + "Para.domain = vstack([(a,z) for a in agrid for z in zgrid])\n", + "\n", + "from matplotlib.pyplot import plot\n", + "%matplotlib inline\n", + "plot(Para.domain[:,0],Para.domain[:,1],'o')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can then approximate the value function $T(V)$ by evaluating $T(V)$ at each point in the domain and interpolating using cubic splines. 
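The evaluate-then-interpolate pattern can be seen in one dimension first — a simplified stand-in for the bivariate cubic splines used here, with numpy's linear `interp` replacing `SmoothBivariateSpline`:

```python
import numpy as np

# Evaluate an "expensive" function on a coarse grid, then interpolate:
# the same pattern as in the text, reduced to one dimension with
# linear interpolation instead of cubic splines.
grid = np.linspace(0.0, 2.0, 21)
values = np.cos(grid)                    # stand-in for evaluating T(V) on the grid
query = np.array([0.05, 0.77, 1.95])     # off-grid points
approx = np.interp(query, grid, values)  # interpolated approximation
exact = np.cos(query)

assert np.max(np.abs(approx - exact)) < 1e-2   # fine grid => small error
```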
If we wished to perform this serially the function would look something like this:" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from scipy.interpolate import SmoothBivariateSpline\n", + "\n", + "def approximateValueFunction_Serial(TV,Para):\n", + "    '''\n", + "    Approximates the value function over the grid defined by Para.domain.\n", + "    '''\n", + "    #get the value at each point in the domain\n", + "    Vs = hstack(map(TV,Para.domain))\n", + "    #approximate using cubic splines\n", + "    Vf = SmoothBivariateSpline(Para.domain[:,0],Para.domain[:,1],Vs)\n", + "    return Vf,Vs" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that $T(V)$ takes a numpy array as an input, so applying $T(V)$ to each element of `Para.domain` is a straightforward application of the `map` command. It is important to realize that once the continuation value function is known, the evaluation of $T(V)$ at each of the points in `Para.domain` is independent. We can therefore easily parallelize the function as follows (this function is contained in `bellman.py`):" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from mpi4py import MPI\n", + "\n", + "def approximateValueFunction(TV,Para):\n", + "    '''\n", + "    Approximates the value function over the grid defined by Para.domain. Uses\n", + "    mpi.
Returns both an interpolated value function and the value of TV at each point\n", + "    in the domain.\n", + "    '''\n", + "    comm = MPI.COMM_WORLD\n", + "    #first split up domain for each process\n", + "    s = comm.Get_size()\n", + "    rank = comm.Get_rank()\n", + "    n = len(Para.domain)\n", + "    m = n//s\n", + "    r = n%s\n", + "    #let each process take a slice of the domain\n", + "    mydomain = Para.domain[rank*m+min(rank,r):(rank+1)*m+min(rank+1,r)]\n", + "\n", + "    #get the value at each point in my domain\n", + "    myV = hstack(map(TV,mydomain))\n", + "    #gather the values from each process\n", + "    Vs = comm.gather(myV)\n", + "    \n", + "    if rank == 0:\n", + "        #fit the values\n", + "        Vs = hstack(Vs).flatten()\n", + "        Vf = SmoothBivariateSpline(Para.domain[:,0],Para.domain[:,1],Vs)\n", + "    else:\n", + "        Vf = None\n", + "    return comm.bcast(Vf),comm.bcast(Vs)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Walking through the code, the first 8 lines split up the domain into chunks for each process to work on. First we divide the number of points in the domain `n` by the number of processes `s`. This gives us the number of points per process `m`. If the number of processes were to divide evenly into the number of points, then the process of rank 0 would take the first `m` points, the process with rank 1 would take the next `m` points, etc. If they do not divide evenly, the command\n", + "\n", + "`\n", + "mydomain = Para.domain[rank*m+min(rank,r):(rank+1)*m+min(rank+1,r)]\n", + "`\n", + "\n", + "adds an extra point to each of the first `r` processes, where `r = n mod s`.\n", + "\n", + "The next line evaluates $T(V)$ at each point in `mydomain`, the process's slice of `Para.domain`. The program then gathers the values of `TV` computed by each process. Since the list produced by `gather` is ordered by the rank of the processes, when we combine via `hstack` we get a list of values equivalent to `hstack(map(TV,Para.domain))`.
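The slice arithmetic is easy to get wrong, so it is worth checking — in plain Python, with no MPI launch required — that the slices `rank*m+min(rank,r):(rank+1)*m+min(rank+1,r)` partition the domain exactly, even when `s` does not divide `n`:

```python
# Check (plain Python, no MPI) that the slices used in
# approximateValueFunction split n points across s processes exactly.
def my_slice(rank, n, s):
    m, r = n // s, n % s
    return list(range(rank * m + min(rank, r), (rank + 1) * m + min(rank + 1, r)))

for n in (300, 301, 7):              # include cases where s does not divide n
    for s in (1, 4, 8):
        chunks = [my_slice(rank, n, s) for rank in range(s)]
        assert sum(chunks, []) == list(range(n))   # no gaps, no overlaps, rank order
        # the first r = n % s ranks each take one extra point
        assert all(len(c) in (n // s, n // s + 1) for c in chunks)
```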
The rank zero process then fits the value function using `SmoothBivariateSpline` and broadcasts it, along with the gathered values, to the other processes. We can now iterate on the value function:" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": { + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "21.4696863104\n", + "14.0421106942\n", + "10.5408456781\n", + "7.53581446045\n", + "5.87085402644\n", + "4.67845883118\n", + "3.75852972605\n", + "3.03921754918\n", + "2.47192092856\n", + "2.02123946321\n" + ] + } + ], + "source": [ + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "\n", + "Vf,Vs = bellman.approximateValueFunction(T(V0),Para)\n", + "for t in range(0,10): \n", + " Vfnew,Vsnew = bellman.approximateValueFunction(T(Vf),Para)\n", + " if rank == 0:\n", + " print(max(abs(Vs-Vsnew)))\n", + " Vf = Vfnew\n", + " Vs = Vsnew" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that even though the function is written using `mpi4py` it works just as well with a single process as with ten." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## IPython Parallel" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Our final toolkit for parallel computation with python is `IPython` parallel. The setup for `IPython` parallel is different from that of `mpi4py`. Rather than having `N` processes running the same code simultaneously, `IPython` parallel has `N` 'engines' in the background to which you can issue commands. This toolkit can be incredibly powerful when combined with mpi4py, since you can then run mpi4py code interactively on the engines. To create an `IPython` cluster with `8` engines run the following command in a terminal:\n", + "\n", + "`ipcluster start -n 8 --engines=MPIEngineSetLauncher`\n", + "\n", + "By default ipcluster does not use `mpi`, hence the additional engines option. 
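(Since the engines will communicate over MPI, it can be worth first verifying the MPI installation itself, assuming `mpiexec` is on the path, by running the hello-world script from the appendix, e.g. `mpiexec -n 8 python ./mpi/mpi_helloworld.py`, which should print a greeting from each of the eight ranks.) 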
We can then connect to the cluster with the following command:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "from ipyparallel import Client\n", + "c = Client()\n", + "print(c.ids)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`c.ids` gives a list of the engines we are connected to. WARNING: the ids associated with `IPython` parallel are NOT the same as the ranks in `mpi4py`. To interact with the engines we need to take a view of them:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "v = c[:]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This gives a view of all the engines, allowing us to interact with them (if we wished for a view of only the first 4 engines we could run `v = c[:4]`). For instance:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "v['a'] = 1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "creates a variable `a` on each of the engines holding the value 1. We can also use `v` to execute code remotely:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "v.execute('import numpy as np')\n", + "v.execute('b = np.random.rand(2)')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This creates a randomly generated variable $b$ on each of the engines. 
This variable can be accessed through `v`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "v['b']" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "run_control": { + "breakpoint": false + } + }, + "source": [ + "To make sure all the engines are working in the current directory we run:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "notebook_path=!pwd\n", + "v.execute('%cd '+notebook_path[0])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### `autopx`" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Hands down the most useful part of `IPython` parallel is the `%autopx` magic command. Once issued, all subsequent commands are run on the engines instead of the client. For example, I can now run `mpi4py` on the engines:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "%autopx" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from mpi4py import MPI\n", + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "print(rank)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now interact with our parallel economics code remotely." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from numpy import *\n", + "from primitives import primitives_CRRA\n", + "import bellman\n", + "\n", + "rank = comm.Get_rank()\n", + "\n", + "#initialize the parameters\n", + "Para = primitives_CRRA()\n", + "r = 0.9995*(1/Para.beta-1)\n", + "\n", + "#create the grid\n", + "agrid = hstack((linspace(Para.a_min,Para.a_min+1.,15),linspace(Para.a_min+2.,Para.a_max,15)))\n", + "zgrid = linspace(Para.z_min,Para.z_max,10)\n", + "Para.domain = vstack([(a,z) for a in agrid for z in zgrid])\n", + "\n", + "#initial value function\n", + "def V0(a,z):\n", + " return Para.U(exp(z)+r*a)/(1-Para.beta)\n", + "\n", + "#construct the Bellman Map\n", + "T = bellman.BellmanMap(Para,r)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "Vf,Vs = bellman.approximateValueFunction(T(V0),Para)\n", + "for t in range(0,10): \n", + " Vfnew,Vsnew = bellman.approximateValueFunction(T(Vf),Para)\n", + " if rank == 0:\n", + " print(max(abs(Vs-Vsnew)))\n", + " Vf = Vfnew\n", + " Vs = Vsnew" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can then pull the value function from the engines and plot it:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "%autopx" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "Vf = v['Vf'][0]\n", + "plot(linspace(0,50,1000),Vf(linspace(0,50,1000),1))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "%autopx" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This can be especially useful for debugging a piece of parallel code. 
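(Recall that `%autopx` is a toggle: issuing it once sends all subsequent cells to the engines, and issuing it again returns control to the local client, which is why it brackets the plotting cell above.) 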
Take the first part of `approximateValueFunction`. I can now evaluate pieces of it and see what they are doing." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "#first split up domain for each process\n", + "s = comm.Get_size()\n", + "rank = comm.Get_rank()\n", + "n = len(Para.domain)\n", + "m = n//s\n", + "r = n%s" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"rank\",rank,\":\", (rank*m+min(rank,r),(rank+1)*m+min(rank+1,r)))\n", + "%autopx" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "I can now see the rows of `Para.domain` assigned to each process." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Caveats" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "One important thing to keep in mind is that the engines have their own local namespaces. You need to make sure that the appropriate libraries are imported on the engines and that the engines are working in the right directory. Finally, while code is running the engines cannot print output; all printed output is delayed until the command has finished. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Multiple Hosts" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Thus far, everything we have done has assumed that all of the processes run on the same machine. Fortunately, running the code on multiple machines requires only a modification to the mpiexec command. The precise syntax differs across versions of mpi; we describe the procedure for `mpich` (the steps for `openmpi` are similar). Suppose we had 4 machines with IP addresses given by host1, host2, host3, and host4, and each of these machines had 4 processors. 
If we want to run python code in parallel across these machines we would create a file `hostfile` with the following lines\n", + "\n", + "`\n", + "host1 : 4\n", + "host2 : 4\n", + "host3 : 4\n", + "host4 : 4\n", + "`\n", + "\n", + "To execute the python code in Main.py we run the command\n", + "\n", + "`\n", + "mpiexec -f hostfile python Main.py\n", + "`" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "collapsed": false + }, + "source": [ + "## Appendix: .py files used " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "%%writefile ./mpi/mpi_helloworld.py\n", + "\n", + "from mpi4py import MPI\n", + "\n", + "comm = MPI.COMM_WORLD #retrieve the world communicator\n", + "rank = comm.Get_rank() #get the rank of the process\n", + "size = comm.Get_size() #get the number of processes\n", + "\n", + "print(\"Hello world from process \"+str(rank)+\" of \" + str(size))\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "%%writefile ./mpi/MPI_montecarlo.py\n", + "\n", + "import numpy as np\n", + "from mpi4py import MPI\n", + "\n", + "N = 10000\n", + "\n", + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "\n", + "x = np.random.randn(N)\n", + "my_result = np.cos(x).mean() #my average\n", + "\n", + "#now average across all processes\n", + "results = comm.gather(my_result)\n", + "if rank == 0:\n", + " print(np.mean(results))\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "%%writefile ./mpi/MPI_montecarlo_allgather.py\n", + "\n", + "import numpy as np\n", + "from mpi4py import MPI\n", + "\n", + "N = 10000\n", + "\n", + "comm = MPI.COMM_WORLD\n", + "rank = 
comm.Get_rank()\n", + "\n", + "x = np.random.randn(N)\n", + "my_result = np.cos(x).mean() #my average\n", + "\n", + "#now gather from all processes\n", + "results = comm.gather(my_result)\n", + "\n", + "print(\"Gather result for process \" + str(rank) +\":\",results)\n", + "\n", + "#now allgather from all processes\n", + "results = comm.allgather(my_result)\n", + "\n", + "print(\"Allgather results for process \" + str(rank) + \":\",results)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "%%writefile ./mpi/MPI_Allgather.py\n", + "\n", + "import numpy as np\n", + "from mpi4py import MPI\n", + "\n", + "comm = MPI.COMM_WORLD\n", + "rank = comm.Get_rank()\n", + "size = comm.Get_size()\n", + "\n", + "# create a 2x2 array filled with the rank of the process\n", + "x = np.ones((2,2)) * rank \n", + "\n", + "# create an empty array to store the gathered result\n", + "X = np.empty((2*size,2))\n", + "\n", + "#use numpy Allgather\n", + " #send buffer #recv buffer\n", + "comm.Allgather([x, MPI.DOUBLE],[X, MPI.DOUBLE])\n", + "\n", + "if rank == 0:\n", + " print(X)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "%%writefile ./mpi/primitives.py\n", + "\n", + "\"\"\"\n", + "Created on Tue Jun 25 13:29:54 2013\n", + "\n", + "@author: dgevans\n", + "\"\"\"\n", + "from numpy import *\n", + "\n", + "class primitives(object):\n", + " '''\n", + " Class that holds the basic parameters of the problem\n", + " '''\n", + " \n", + " beta = 0.95 #discount factor\n", + " \n", + " rho = 0.9 #persistence of idiosyncratic productivity\n", + " \n", + " sigma_e = 0.1 #standard deviation of idiosyncratic productivity\n", + " \n", + " a_min = 0.0 #minimum assets\n", + " \n", + " a_max = 60.0 #maximum assets\n", + " \n", + " z_min = 
-2. #minimum productivity\n", + " \n", + " z_max = 2. #maximum productivity\n", + " \n", + " \n", + " \n", + "class primitives_CRRA(primitives):\n", + " '''\n", + " Extension of primitives class holding the preference structure\n", + " '''\n", + " \n", + " sigma = 2.\n", + " \n", + " def U(self,c):\n", + " \"\"\"\n", + " CRRA utility function\n", + " \"\"\"\n", + " sigma = self.sigma\n", + " if sigma == 1.:\n", + " return log(c)\n", + " else:\n", + " return (c)**(1-sigma)/(1-sigma)\n", + " \n", + " def Uc(self,c):\n", + " \"\"\"\n", + " Derivative of the CRRA utility function\n", + " \"\"\"\n", + " sigma = self.sigma\n", + " return c**(-sigma)\n", + " \n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "%%writefile ./mpi/bellman.py\n", + "\n", + "\"\"\"\n", + "Created on Tue Jun 25 13:40:30 2013\n", + "Holds all of the code for the bellman equation\n", + "@author: dgevans\n", + "\"\"\"\n", + "\n", + "from scipy.optimize import minimize_scalar\n", + "\n", + "from numpy import * \n", + "from mpi4py import MPI\n", + "from scipy.interpolate import SmoothBivariateSpline\n", + "from numpy.polynomial.hermite import hermgauss\n", + "\n", + "#compute nodes for gaussian quadrature.\n", + "zhatvec,z_weights = hermgauss(10)\n", + "z_weights /= sum(z_weights)\n", + "\n", + "\n", + "def approximateValueFunction(TV,Para):\n", + " '''\n", + " Approximates the value function over the grid defined by Para.domain. Uses\n", + " mpi. 
Returns both an interpolated value function and the value of TV at each point\n", + " in the domain.\n", + " '''\n", + " comm = MPI.COMM_WORLD\n", + " #first split up domain for each process\n", + " s = comm.Get_size()\n", + " rank = comm.Get_rank()\n", + " n = len(Para.domain)\n", + " m = n//s\n", + " r = n%s\n", + " #let each process take a slice of the domain\n", + " mydomain = Para.domain[rank*m+min(rank,r):\n", + " (rank+1)*m+min(rank+1,r)]\n", + "\n", + " #get the value at each point in my domain\n", + " myV = hstack(map(TV,mydomain))\n", + " #gather the values for each process\n", + " Vs = comm.gather(myV)\n", + " \n", + " if rank == 0:\n", + " #fit the values\n", + " Vs = hstack(Vs).flatten()\n", + " Vf = SmoothBivariateSpline(Para.domain[:,0],Para.domain[:,1],Vs)\n", + " else:\n", + " Vf = None\n", + " return comm.bcast(Vf),comm.bcast(Vs)\n", + " \n", + " \n", + " \n", + " \n", + "\n", + "class BellmanMap(object):\n", + " '''\n", + " Bellman map object. Once constructed, it will take a value function and return\n", + " a new value function\n", + " '''\n", + " \n", + " def __init__(self,Para,r):\n", + " '''\n", + " Initializes the Bellman Map function. 
Need parameters and r\n", + " '''\n", + " self.Para = Para\n", + " self.r = r\n", + " self.w = 1.\n", + " \n", + " def __call__(self,Vf):\n", + " '''\n", + " Given a current value function return a new value function\n", + " '''\n", + " self.Vf = Vf\n", + " \n", + " return lambda x: self.maximizeObjective(x[0],x[1])[0]\n", + " \n", + " \n", + " def maximizeObjective(self,a,z):\n", + " '''\n", + " Maximize the objective function Vf given assets a and productivity z.\n", + " Note the states for this problem are assets and the previous period's productivity.\n", + " Returns the tuple (V,c_policy,a_policy)\n", + " '''\n", + " \n", + " #total wealth for the agent\n", + " W = (1+self.r)*a + exp(z)*self.w\n", + " beta = self.Para.beta\n", + " Vf = self.Vf\n", + " \n", + " \n", + " def obj_f(aprime):\n", + " c = W-aprime\n", + " return -(self.Para.U(c) + beta*self.E(lambda zprime:Vf(aprime,zprime).flatten(),z))\n", + " \n", + " a_min = self.Para.a_min\n", + " a_max = min(self.Para.a_max,W-0.00001)\n", + " \n", + " res = minimize_scalar(obj_f,bounds=(a_min,a_max),method='bounded')\n", + " \n", + " return -res.fun,W-res.x,res.x\n", + " \n", + " def E(self,f,z_):\n", + " '''\n", + " Compute the expected value of a function f of zprime conditional on z_\n", + " '''\n", + " #Pz_min,Pz_max,z_pdf = self.get_z_distribution(z_)\n", + " z_min = array([self.Para.z_min])\n", + " z_max = array([self.Para.z_max])\n", + " #Compute the nodes for Gaussian quadrature\n", + " zvec = self.Para.rho * z_ + self.Para.sigma_e*zhatvec*sqrt(2)\n", + " \n", + " #lower bound for z\n", + " zvec[zvec<z_min] = z_min\n", + " #upper bound for z\n", + " zvec[zvec>z_max] = z_max\n", + " return z_weights.dot(f(zvec))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false, + "run_control": { + "breakpoint": false + } + }, + "outputs": [], + "source": [ + "%%writefile ./mpi/Main.py\n", + "\n", + "\"\"\"\n", + "Created on Tue Jun 25 16:50:32 2013\n", + "\n", + "@author: dgevans\n", + "\"\"\"\n", + "\n", + "from numpy import *\n", + 
"from primitives import primitives_CRRA\n", + "import bellman\n", + "from mpi4py import MPI\n", + "\n", + "rank = MPI.COMM_WORLD.Get_rank()\n", + "\n", + "#initialize the parameters\n", + "Para = primitives_CRRA()\n", + "r = 0.9995*(1/Para.beta-1)\n", + "\n", + "#create the grid\n", + "agrid = hstack((linspace(Para.a_min,Para.a_min+1.,15),linspace(Para.a_min+2.,Para.a_max,15)))\n", + "zgrid = linspace(Para.z_min,Para.z_max,10)\n", + "Para.domain = vstack([(a,z) for a in agrid for z in zgrid])\n", + "\n", + "#initial value function\n", + "def V0(a,z):\n", + " return Para.U(exp(z)+r*a)/(1-Para.beta)\n", + "\n", + "#construct the Bellman Map\n", + "T = bellman.BellmanMap(Para,r)\n", + "\n", + "#iterate on the Bellman Map until convergence\n", + "Vf,Vs = bellman.approximateValueFunction(T(V0),Para)\n", + "diff = 1.\n", + "while diff > 1e-6: \n", + " Vfnew,Vsnew = bellman.approximateValueFunction(T(Vf),Para)\n", + " diff = max(abs(Vs-Vsnew))\n", + " if rank == 0:\n", + " print(diff)\n", + " Vf = Vfnew\n", + " Vs = Vsnew\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.5.1" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/mpi/MPI_Allgather.py b/mpi/MPI_Allgather.py new file mode 100644 index 0000000..b2dce25 --- /dev/null +++ b/mpi/MPI_Allgather.py @@ -0,0 +1,20 @@ + +import numpy as np +from mpi4py import MPI + +comm = MPI.COMM_WORLD +rank = comm.Get_rank() +size = comm.Get_size() + +# create a 2x2 array filled with the rank of the process +x = np.ones((2,2)) * rank + +# create an empty array to store the gathered result +X 
= np.empty((2*size,2)) + +#use numpy Allgather + #send buffer #recv buffer +comm.Allgather([x, MPI.DOUBLE],[X, MPI.DOUBLE]) + +if rank == 0: + print(X) \ No newline at end of file diff --git a/mpi/MPI_montecarlo.py b/mpi/MPI_montecarlo.py new file mode 100644 index 0000000..8f97acb --- /dev/null +++ b/mpi/MPI_montecarlo.py @@ -0,0 +1,16 @@ + +import numpy as np +from mpi4py import MPI + +N = 10000 + +comm = MPI.COMM_WORLD +rank = comm.Get_rank() + +x = np.random.randn(N) +my_result = np.cos(x).mean() #my average + +#now average across all processes +results = comm.gather(my_result) +if rank == 0: + print(np.mean(results)) \ No newline at end of file diff --git a/mpi/MPI_montecarlo_allgather.py b/mpi/MPI_montecarlo_allgather.py new file mode 100644 index 0000000..65fe05c --- /dev/null +++ b/mpi/MPI_montecarlo_allgather.py @@ -0,0 +1,21 @@ + +import numpy as np +from mpi4py import MPI + +N = 10000 + +comm = MPI.COMM_WORLD +rank = comm.Get_rank() + +x = np.random.randn(N) +my_result = np.cos(x).mean() #my average + +#now gather from all processes +results = comm.gather(my_result) + +print("Gather result for process " + str(rank) +":",results) + +#now allgather from all processes +results = comm.allgather(my_result) + +print("Allgather results for process " + str(rank) + ":",results) \ No newline at end of file diff --git a/mpi/Main.py b/mpi/Main.py new file mode 100644 index 0000000..c743937 --- /dev/null +++ b/mpi/Main.py @@ -0,0 +1,39 @@ + +""" +Created on Tue Jun 25 16:50:32 2013 + +@author: dgevans +""" + +from numpy import * +from primitives import primitives_CRRA +import bellman +from mpi4py import MPI + +rank = MPI.COMM_WORLD.Get_rank() + +#initialize the parameters +Para = primitives_CRRA() +r = 0.9995*(1/Para.beta-1) + +#create the grid +agrid = hstack((linspace(Para.a_min,Para.a_min+1.,15),linspace(Para.a_min+2.,Para.a_max,15))) +zgrid = linspace(Para.z_min,Para.z_max,10) +Para.domain = vstack([(a,z) for a in agrid for z in zgrid]) + +#initial value function 
+def V0(a,z): + return Para.U(exp(z)+r*a)/(1-Para.beta) + +#construct the Bellman Map +T = bellman.BellmanMap(Para,r) + +#iterate on the Bellman Map until convergence +Vf,Vs = bellman.approximateValueFunction(T(V0),Para) +diff = 1. +while diff > 1e-6: + Vfnew,Vsnew = bellman.approximateValueFunction(T(Vf),Para) + diff = max(abs(Vs-Vsnew)) + if rank == 0: + print(diff) + Vf = Vfnew + Vs = Vsnew \ No newline at end of file diff --git a/mpi/__init__.py b/mpi/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/mpi/bellman.py b/mpi/bellman.py new file mode 100644 index 0000000..eee5974 --- /dev/null +++ b/mpi/bellman.py @@ -0,0 +1,115 @@ + +""" +Created on Tue Jun 25 13:40:30 2013 +Holds all of the code for the Bellman equation +@author: dgevans +""" + +from scipy.optimize import minimize_scalar + +from numpy import * +from mpi4py import MPI +from scipy.interpolate import SmoothBivariateSpline +from numpy.polynomial.hermite import hermgauss + +#compute nodes for gaussian quadrature. +zhatvec,z_weights = hermgauss(10) +z_weights /= sum(z_weights) + + +def approximateValueFunction(TV,Para): + ''' + Approximates the value function over the grid defined by Para.domain. Uses + mpi. Returns both an interpolated value function and the value of TV at each point + in the domain. + ''' + comm = MPI.COMM_WORLD + #first split up domain for each process + s = comm.Get_size() + rank = comm.Get_rank() + n = len(Para.domain) + m = n//s + r = n%s + #let each process take a slice of the domain + mydomain = Para.domain[rank*m+min(rank,r): + (rank+1)*m+min(rank+1,r)] + + #get the value at each point in my domain + myV = hstack(map(TV,mydomain)) + #gather the values for each process + Vs = comm.gather(myV) + + if rank == 0: + #fit the values + Vs = hstack(Vs).flatten() + Vf = SmoothBivariateSpline(Para.domain[:,0],Para.domain[:,1],Vs) + else: + Vf = None + return comm.bcast(Vf),comm.bcast(Vs) + + + + + +class BellmanMap(object): + ''' + Bellman map object. 
Once constructed, it will take a value function and return + a new value function + ''' + + def __init__(self,Para,r): + ''' + Initializes the Bellman Map function. Need parameters and r + ''' + self.Para = Para + self.r = r + self.w = 1. + + def __call__(self,Vf): + ''' + Given a current value function return a new value function + ''' + self.Vf = Vf + + return lambda x: self.maximizeObjective(x[0],x[1])[0] + + + def maximizeObjective(self,a,z): + ''' + Maximize the objective function Vf given assets a and productivity z. + Note the states for this problem are assets and the previous period's productivity. + Returns the tuple (V,c_policy,a_policy) + ''' + + #total wealth for the agent + W = (1+self.r)*a + exp(z)*self.w + beta = self.Para.beta + Vf = self.Vf + + + def obj_f(aprime): + c = W-aprime + return -(self.Para.U(c) + beta*self.E(lambda zprime:Vf(aprime,zprime).flatten(),z)) + + a_min = self.Para.a_min + a_max = min(self.Para.a_max,W-0.00001) + + res = minimize_scalar(obj_f,bounds=(a_min,a_max),method='bounded') + + return -res.fun,W-res.x,res.x + + def E(self,f,z_): + ''' + Compute the expected value of a function f of zprime conditional on z_ + ''' + #Pz_min,Pz_max,z_pdf = self.get_z_distribution(z_) + z_min = array([self.Para.z_min]) + z_max = array([self.Para.z_max]) + #Compute the nodes for Gaussian quadrature + zvec = self.Para.rho * z_ + self.Para.sigma_e*zhatvec*sqrt(2) + + #lower bound for z + zvec[zvec<z_min] = z_min + #upper bound for z + zvec[zvec>z_max] = z_max + return z_weights.dot(f(zvec)) \ No newline at end of file diff --git a/mpi/mpi_helloworld.py b/mpi/mpi_helloworld.py new file mode 100644 index 0000000..f63f538 --- /dev/null +++ b/mpi/mpi_helloworld.py @@ -0,0 +1,8 @@ + +from mpi4py import MPI + +comm = MPI.COMM_WORLD #retrieve the world communicator +rank = comm.Get_rank() #get the rank of the process +size = comm.Get_size() #get the number of processes + +print("Hello world from process "+str(rank)+" of " + str(size)) \ No newline at end of file diff --git a/mpi/primitives.py 
b/mpi/primitives.py new file mode 100644 index 0000000..b6fdc7a --- /dev/null +++ b/mpi/primitives.py @@ -0,0 +1,54 @@ + +""" +Created on Tue Jun 25 13:29:54 2013 + +@author: dgevans +""" +from numpy import * + +class primitives(object): + ''' + Class that holds the basic parameters of the problem + ''' + + beta = 0.95#discount factor + + rho = 0.9#persistence of idiosyncratic productivity + + sigma_e = 0.1#standard deviation of idiosyncratic productivity + + a_min = 0.0 #minimum assets + + a_max = 60.0 #maximum assets + + z_min = -2. #minimum productivity + + z_max = 2. #maximum productivity + + + +class primitives_CRRA(primitives): + ''' + Extension of primitives class holding the preference structure + ''' + + sigma = 2. + + def U(self,c): + """ + CRRA utility function + """ + sigma = self.sigma + if sigma == 1.: + return log(c) + else: + return (c)**(1-sigma)/(1-sigma) + + def Uc(self,c): + """ + Derivative of the CRRA utility function + """ + sigma = self.sigma + return c**(-sigma) + + \ No newline at end of file