Cameron Gordon

Artificial Economics

Those who know me will know that I've run a couple of blogs over the years - this marks the first time I've decided to do so under my own name. This will very much be a hobby horse - a way of collecting a few thoughts and ideas as I undertake a Master's. Some of it will be wrong, some of it will be trivially obvious - almost all of it will be misguided. But hopefully there will be enough here to spark the interest of other people.


For the first post I want to outline the idea that economics can benefit from ideas found in computer science and artificial intelligence. With good reason, AI techniques such as Bayesian learning, artificial neural nets, and deep learning are set to revolutionise the field: the predictive and classification abilities of these models are powerful, and likely to force their way into modern statisticians' and econometricians' toolboxes.


But for me the exciting part of artificial intelligence is what these models can tell us about how humans, firms, and economies make decisions. At heart, this is the central question behind economics - 'Why do people do what they do?', 'How did things come to be how they are?', and 'Why did they spend money on that?!' - and it's one whose standard answers (as any psychologist or behavioural economist will attest) are somewhat lacking, resting on unsatisfying utopian assumptions and (invisibly) hand-wavy processes.


This is also the question that consumes computer scientists and artificial intelligence researchers: how can we build an unthinking machine, and have it make an intelligent decision? As I'll argue, the way that artificial agents make decisions has parallels with how we observe people making decisions. Most importantly, the limitations that artificial agents face produce results that are similar to behavioural economic paradoxes. By observing the situations in which artificial agents struggle to make decisions, we can better understand how (intelligent) humans make economic decisions.


To illustrate this, I'll give an example of how the standard rational agents model can be made more consistent with observed human behaviour by looking at a constraint that any artificial agent running on a CPU has to contend with: computational overhead.


When I was a student studying economics, I remember one heated discussion between a couple of academics. One had made the mistake of saying that a behaviour wasn't 'rational'; the other, piqued, retorted that 'Rationality means they have a complete and transitive preference set! Nothing more, nothing less.'


Under a rational agents model, an economic agent with rational preferences will maximise their happiness, subject to constraints on their budget, information, and the physical laws of the world. The unstated assumption is that 'finding' this ideal maximal position is a simple or trivial exercise. In reality nothing could be further from the truth - it's hard enough to decide between two different brands of yogurt (let alone an effectively unbounded set of purchasable goods), and economics is rightly criticised for over-reliance on this omniscience.
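To make the textbook setup concrete, here's a minimal sketch of the problem the model assumes away: a utility maximiser facing a budget constraint. The Cobb-Douglas utility, the prices, and the budget are all made-up assumptions for illustration, and scipy's solver stands in for the agent's supposedly effortless search.

```python
# A minimal sketch of the textbook problem: pick quantities of two goods to
# maximise utility subject to a budget. Everything here (the Cobb-Douglas
# utility, prices, and budget) is an illustrative assumption.
from scipy.optimize import minimize

prices = [4.0, 2.0]  # assumed prices of goods x and y
budget = 20.0

def neg_utility(q):
    x, y = q
    return -(x ** 0.5) * (y ** 0.5)  # Cobb-Douglas utility, negated for a minimiser

# Spend no more than the budget; buy (strictly) positive quantities.
budget_constraint = {"type": "ineq",
                     "fun": lambda q: budget - prices[0] * q[0] - prices[1] * q[1]}
bounds = [(1e-6, None), (1e-6, None)]

result = minimize(neg_utility, x0=[1.0, 1.0],
                  bounds=bounds, constraints=budget_constraint)
print(result.x)  # the 'ideal maximal position' the agent is assumed to find instantly
```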


Now consider a couple of well-known behavioural economic paradoxes: hyperbolic discounting (the observed tendency for people to express a strong preference for immediate rewards) and loss aversion (the overvaluation of avoiding a loss compared to acquiring an equivalent gain). Both are deemed paradoxes because they do not fit the expectations of a rational agent model.
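For concreteness, here's a toy comparison of the two discounting curves involved: the exponential form a rational agent is assumed to apply, and the hyperbolic form (Mazur's V = A / (1 + kD)) that better fits observed choices. The parameter values are arbitrary assumptions for illustration.

```python
# Toy comparison of exponential vs hyperbolic discounting of a $100 reward.
# delta and k are arbitrary illustrative parameters.
def exponential(value, delay, delta=0.9):
    return value * delta ** delay      # the textbook 'rational' form

def hyperbolic(value, delay, k=1.0):
    return value / (1 + k * delay)     # Mazur's form, fitted to observed choices

for delay in [0, 1, 2, 5, 10]:
    print(f"delay={delay:>2}  exp={exponential(100, delay):6.1f}  "
          f"hyp={hyperbolic(100, delay):6.1f}")
# The hyperbolic curve falls off steeply at short delays - the 'strong
# preference for immediate rewards' described above.
```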


Artificial intelligence often faces a very similar problem to the rational agents model: we wish to maximise a metric (e.g. performance on an Atari game, or profit at a coal terminal) subject to constraints. But here, finding this maximal position is a difficult task whose tractability depends on computational power: the objective of the engineer is not only to find an effective solution, but to do so efficiently (minimising computation time and space).


Now to the parallel with human behaviour. Among the most efficient (and effective, though suboptimal) strategies is the use of simple heuristics to guide a program's optimisation. For example, a program that myopically maximises profit per month (or, at a stretch, over two or three months) will do fairly well - and even though in theory a far longer-term view (e.g. years or decades) could leave it in the dust, planning that far ahead could lie well beyond the computational capacity of even a supercomputer.
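As a minimal sketch of that heuristic (the two-period profit table is a made-up assumption, not anything from a real system): the myopic rule is cheap to evaluate, while the farsighted rule must consider the whole horizon - trivial here, but combinatorially explosive with many actions chained over many periods.

```python
# Toy illustration: a myopic agent vs a farsighted one. The profit table
# (profit this month, profit next month) is a made-up assumption.
profits = {"invest": [-5, 20], "hold": [3, 3]}

def myopic_choice(options):
    # Heuristic: look only at the immediate payoff - cheap, and often good enough.
    return max(options, key=lambda a: options[a][0])

def farsighted_choice(options):
    # Full lookahead: sum over the whole horizon. Trivial with two actions and
    # two periods; with many actions over years, the plan space explodes.
    return max(options, key=lambda a: sum(options[a]))

print(myopic_choice(profits))      # 'hold'   - avoids the up-front loss
print(farsighted_choice(profits))  # 'invest' - better in total, costlier to find
```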


Like computers, we are limited thinking machines. Our brains are voracious consumers of energy, and are limited in computation and capacity. Try multiplying 46*93 in your head, or naming more than ten types of cheese - it's possible, but not costless in time and mental energy. Thinking, Fast and Slow, the touchstone piece of behavioural economics literature by Daniel Kahneman, outlines a number of examples and mechanisms of these limitations.


Consider a computationally constrained artificial agent tackling a hard optimisation problem over a long time period. A myopic strategy (or one which can look only a couple of steps ahead) will be biased towards immediate reward (i.e. a dollar today rather than two in a year's time), and will seek to avoid losses (even if a small loss today opens up potentially large gains in the future). Both actions are suboptimal compared to a hypothetical best (they miss the global optimum), but drop out naturally in response to computational limits. At least at a glance, this result looks similar to what we observe with hyperbolic discounting and loss aversion.
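Here's a toy version of that mechanism: a planner that can only 'see' the first few periods of each option. The reward streams and horizon values are assumptions invented for this sketch, but the pattern - present bias emerging purely from a compute cap - is the point.

```python
# A horizon-limited planner choosing between two made-up reward streams.
def visible_value(rewards, horizon):
    # The agent can only evaluate the first `horizon` periods of a plan.
    return sum(rewards[:horizon])

option_a = [1] + [0] * 12   # a dollar today
option_b = [0] * 12 + [2]   # two dollars in a year (period 12)

for horizon in [1, 6, 13]:
    a = visible_value(option_a, horizon)
    b = visible_value(option_b, horizon)
    print(f"horizon={horizon:>2}: choose {'A (immediate)' if a >= b else 'B (delayed)'}")
# Short horizons make the delayed, larger reward invisible, so the agent
# 'prefers' immediate payoffs. Likewise, a small loss at t=0 that unlocks a
# large gain at t=12 is rejected by any planner whose horizon stops short of it.
```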


Now, this similarity doesn't mean that computational limits are the reason we observe these two paradoxes - it's possible to get the same result by adding other parameters (e.g. psychological inertia, uncertainty aversion). However, it does provide a neat and simple analogy for the behaviour, found simply by examining what happens when we program an artificial agent to face a decision problem.


Our understanding of why humans make decisions (and why we make 'irrational' decisions) is likely to grow as we develop further understanding of artificial intelligence. Evolution has provided our brains with a spongy mass of neurons able to efficiently route, predict, and plan through the complex hodgepodge of life - but imperfectly, and the 'intelligence' that Siri draws from a mass of iPhone circuitry and statistical wizardry springs from the same well. By understanding how these unthinking artificial models begin to show glimmers of intelligent decision making, economists can start bringing our own models of human behaviour back into the field of bounded rationality.
