Amateur Economics

Economics, Policy, Feedback, and Mental Models

Is Eminent Domain necessary?


Two cases have recently brought Eminent Domain to the front pages: the Kelo case from 2005, and Brooklyn’s own Goldstein et al. v. Empire State Development Corporation, decided two weeks ago. In both cases it was successfully argued that the potential gains to a city from a property development are sufficient to justify taking the property that sits where the development will be built. In New London, Connecticut, a Pfizer facility was to be built on the land of the plaintiff, Susette Kelo. In Brooklyn, a shopping mall and basketball stadium are to be built. In both cases it was found that the land was blighted and that the developments would improve it and be beneficial in a number of ways: they’ll clean it up, they’ll bring construction jobs and retail jobs, and they’ll draw people to the area, thereby benefiting the surrounding community. That sounds great, but how do you measure these benefits? And does the rule of law have the tools to do so?

In a funny twist, Pfizer announced a few weeks ago that they are not going to continue their development project, citing the economic downturn. However, Ms. Kelo’s house has already been bulldozed. So it would seem that, though as a legal matter a project must be sufficiently beneficial to a blighted community for property to be taken under Eminent Domain, as a practical matter the idea of a development need only be sufficiently lovely to justify such an action. Does the rule of law have any way of compelling a developer to follow through on its plans? Can we fine a developer for not following through? If a developer didn’t follow through, it’s probably bankrupt, or close enough to it that any fine large enough to be compelling would be more than it could pay. Can we send the development company to jail? … No.

So we can’t use the rule of law to make sure a project meets the expectations that justified it in the first place. But can we rely on the rule of law to determine whether those expectations, if met, would justify taking people’s property? In both cases mentioned above, we have to compare the current state of the land to what will become of it. The two cases are tied together in another way as well: they both define “Public Use” (a condition for justifying the taking) not just as a literal public use, like a highway or park owned and used by the public, but as a private project that will be sufficiently beneficial to the public by virtue of the community’s economic gain.

The moral argument for taking property under Eminent Domain is just like the financial argument for purchasing property without Eminent Domain. Under Eminent Domain, we use the rule of law to engage disinterested judges to evaluate the pros and cons of the project. The judges listen in a courtroom to the plaintiffs, describing in words the meaning and value of their homes. Then they listen to the developer, describing in words the meaning and value of the development.

Now, how do you determine whether land is blighted? You might think a person’s land is blighted, but do they? As a practical matter, under the rule of law, the disinterested judges can turn to studies commissioned by the developer to see the extent of the blight eating away at the community. Surprisingly, that is what was done in the Brooklyn case. But even if judges turned to an objective study to determine the blightedness of a community, I believe price theory can provide a better measure.

Imagine there were no Eminent Domain: the developers would have to sell the idea to investors. They would have to argue that the project will be sufficiently profitable to justify paying for the land, including perhaps paying millions of dollars for that last holdout hovel. If a developer can’t convince investors that the project will be sufficiently profitable, what ground does it stand on to make the moral argument that the project will be sufficiently beneficial to the community? The developer isn’t building a park, which is not a profit-making venture; it’s building a mall whose very justification is the profit it will bring.

Under Eminent Domain the state is obliged to fairly compensate the current landowners. Spinoza defined a godly price as the price two people agree to willingly. If a judge determines that land is blighted, though the landowners don’t think so, their compensation won’t be fair in their eyes, nor would it be godly in Spinoza’s. But if landowners were to sell to the developer at a freely negotiated price, it wouldn’t matter at all who does or does not think the land is blighted, or whether anyone agrees with anyone else about it. The parties would come to terms directly with each other about the meaning and value of their respective property.

I contend that, in cases of private developments with potential benefit to the community, fair compensation under Eminent Domain should be the same as it would be without Eminent Domain. It follows that, in such cases, there should be no Eminent Domain.

Written by NAO

December 5, 2009 at 4:23 pm

Posted in Economics, Metrics, Policy

Health Care Reform — Bundled Payments


The health care reform debate isn’t about reforming health care, it’s about reforming health insurance.  But insurance is only one of the many complicated aspects of the whole health care system in America.  What if health care were brand new?  How would we set it up?

The way our health care system is set up now, we incentivize specialization for doctors, we incentivize treatment instead of prevention, and we incentivize processes over outcomes. To really change the system so that we optimize the health and well-being of the most Americans as efficiently and effectively as possible, we need to look at how costs are incurred by doctors and hospitals and how payments are made to them.  For example, if payments were based on outcomes rather than processes, the incentives would change: you incentivize reaching the outcome rather than following the process.

Michael Porter of Harvard advocates a system called bundled payments.  In an interview with Bloomberg’s Tom Keene, he says, “We organize our health care delivery around specialties and interventions, not about bringing together what’s necessary to care for the patient’s medical condition.” (The interview is no longer available from Bloomberg or iTunes, but it was from the January 29, 2009 “On the Economy with Tom Keene.”)  More from that interview:

We need to totally reorganize health care like we totally reorganized business.  In the old days, businesses were organized functionally: there was marketing, production, finance; then we figured out, no, that doesn’t make sense—we need to move to a business unit structure, we need to organize around the customer, we need to pull together production, finance, and marketing around each production line.  And health care is still based on 200-year-old organizational principles.

It’s the way doctors have traditionally been trained—they’ve been trained as a radiologist, so in the hospital there’s a radiology department.  They haven’t been trained to do cancer care, they’ve been trained to be a surgeon.  They’ve been trained to be an oncologist that does chemotherapy.

Right now we pay each of those doctors separately for their little service.  What we need to move to is we need to pay the team for the total care process.  That’s called bundled payment.  If you have breast cancer stage 1, you get a bundled payment of x.  You care for the patient, you’re responsible to get that patient well.  And then you doctors and experts, you decide the best way to spend the money in order to get the best patient outcomes.

The idea of bundled payments is seconded by a recent report in the New England Journal of Medicine.  The authors estimate that switching to bundled payments for a set of 10 particular conditions and procedures (they don’t state which ones) that require hospitalization could reduce national health care spending by up to 5.4% relative to what we would otherwise spend between 2010 and 2019.  One reason they say this system would save money is that it would reduce the total “volume of services.”

If the doctors and experts are responsible for (and paid for) outcomes and not processes, and if they form a “business unit” team around the patient, the thinking is that they’ll coordinate their treatments, they’ll order only the necessary tests, and they’ll find a way to reach the outcome on budget.
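As a toy illustration of that incentive shift (a sketch with invented categories and dollar figures, not actual reimbursement rates or billing codes), compare paying each provider separately for each service with paying the team once per condition:

```python
# Toy comparison: fee-for-service vs. a bundled payment.
# All categories and dollar figures are invented for illustration.

fee_for_service = {
    "surgery": 12_000,
    "chemotherapy": 8_000,
    "radiology": 5_000,
    "additional imaging": 3_000,  # under this model, every extra test adds revenue
}

bundled_payment = {
    "breast cancer, stage 1": 24_000,  # one payment to the whole care team
}

# Fee-for-service revenue grows with the volume of services...
print(sum(fee_for_service.values()))              # 28000

# ...while a bundled payment is fixed per condition, so the team is rewarded
# for reaching the outcome within the budget, not for doing more.
print(bundled_payment["breast cancer, stage 1"])  # 24000
```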

While this sounds like it might be a better system, it would depend upon correctly determining the patient’s condition.  The whole process of diagnosis is excluded from the bundled payment system, because you can’t go down the road of getting the cancer team together until it’s been determined what stage of what cancer a patient has.

Written by NAO

November 30, 2009 at 1:28 am

Posted in Insurance

Third annual Formula 1 driver ranking


This doesn’t have much to do with economics.  In Formula 1 racing it’s hard to determine who the best driver is, because there are significant differences between the cars.  The most talented driver might be driving a lousy car; the World Champion may simply have been driving the fastest car.  One thing you can look at is how a driver fares against his teammate, but that doesn’t tell you how all the drivers compare against each other.  What I’ve done, for the third year now, is analyze drivers’ change in position during races.  Here are the results:

Rank Driver Average Change in Position
1 Sebastien Bourdais 2.8
2 Timo Glock 2.1
3 Giancarlo Fisichella 1.3
4 Felipe Massa 1.3
5 Vitantonio Liuzzi 1.0
6 Kamui Kobayashi 1.0
7 Jenson Button 1.0
8 Heikki Kovalainen 0.5
9 Luca Badoer 0.5
10 Nick Heidfeld 0.3
11 Mark Webber 0.2
12 Lewis Hamilton 0.1
13 Fernando Alonso -0.1
14 Nelsinho Piquet -0.1
15 Jarno Trulli -0.2
16 Sebastien Buemi -0.2
17 Rubens Barrichello -0.3
18 Sebastian Vettel -0.4
19 Nico Rosberg -0.4
20 Kimi Räikkönen -0.9
21 Robert Kubica -1.1
22 Adrian Sutil -1.5
23 Jaime Alguersuari -1.7
24 Kazuki Nakajima -1.8
25 Romain Grosjean -2.8

My methodology has been streamlined since I first tried this.  Using information from Formula1.com, I’ve made a spreadsheet of the starting and finishing position for each driver, for each race.  I then make what I call an adjusted grid:  if a driver doesn’t finish a race he’s not counted, and those who started behind him are moved up.  For example, if you started in 10th place, but the driver in 5th place didn’t finish, I say that he didn’t start, and that you started in 9th place.
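For concreteness, here is a minimal sketch of that calculation in Python. The data format is my own invention (one list of (driver, start, finish) tuples per race, with finish set to None for a non-finisher), and it assumes a dense grid, i.e. every starting position from 1 to N is occupied:

```python
from collections import defaultdict

def adjusted_changes(race):
    """Per-driver change in position for one race, using the adjusted grid.

    `race` is a list of (driver, start, finish) tuples; finish is None for a
    driver who didn't finish.  Non-finishers are treated as never having
    started, so everyone who started behind them moves up the grid.
    """
    finishers = [(driver, start, finish)
                 for driver, start, finish in race if finish is not None]
    finishers.sort(key=lambda entry: entry[1])  # original grid order
    changes = {}
    for adjusted_start, (driver, _start, finish) in enumerate(finishers, start=1):
        changes[driver] = adjusted_start - finish  # positive = positions gained
    return changes

def season_averages(races):
    """Average each driver's per-race change over the races he finished."""
    totals, counts = defaultdict(int), defaultdict(int)
    for race in races:
        for driver, change in adjusted_changes(race).items():
            totals[driver] += change
            counts[driver] += 1
    return {driver: totals[driver] / counts[driver] for driver in totals}
```

On the example above: if the fifth-place starter retires, the tenth-place starter’s adjusted start becomes ninth, so finishing ninth counts as a change of zero rather than a gain of one.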

Every year the results have been interesting, but I still don’t know whether it’s a good measure of who the best driver is.  The biggest drawback is that the driver in first place has no opportunity to improve his position–and a driver who often starts at the front will probably have a lower score in this ranking because of that.

What I’d like to do next is analyze data from many years in the same way.  Then I’d be able to average out more of the car-based variance: for example, by comparing Fernando Alonso in a McLaren and in a Renault.

Written by NAO

November 19, 2009 at 1:30 am

Posted in Metrics

Regulation versus Incentives – Formula 1 Test Case



I’m a big fan of Formula 1 racing. What I especially like about it is that there is as much competition in technological innovation as there is in skillful driving. However, in the last year, the governing body of Formula 1 (the FIA) has taken the global financial crisis as an opportunity to gut F1’s capacity for innovation.

During the 2008 season, the Japanese team Super Aguri failed to secure financing and had to quit; at the end of that season Honda pulled out, saying they couldn’t sustain the expenditure. This year BMW announced that this would be their last season; the bank RBS couldn’t continue funding the Williams team; and ING announced this would be their last year in F1, both as the title sponsor to the ING Renault team and as a major trackside and race sponsor. The contention of the FIA is that costs have gotten out of control and it is now too expensive to run an F1 team.

You have to be careful how you frame a problem, because in framing the problem you are also framing the solution. In this case the problem was framed as “F1 costs too much,” so the solution is to make it cheaper, by regulating the sport such that teams are limited in many of their costly activities.

For example, teams are now limited in the number of hours they can spend on wind-tunnel testing; teams are limited to a total of 8 engines per car per year (only a few years ago they’d use a new engine for practice, another new engine for qualifying, and another new engine for the race, every race weekend); and teams have been forbidden to do development work on the engines since the beginning of 2008. There is now no in-season track testing at all. Every one of these rules saves money. But crucially, it does so at the expense of technological innovation.

These new regulations are on top of an already long list of constraints on the size and shape of engines, the size and shape of the cars, and the size and position of the wings. Curiously, the only place where teams have wide freedom to innovate is with an FIA-prescribed device that most teams are antipathetic to: the Kinetic Energy Recovery System, or KERS. Essentially, it makes an F1 car a hybrid, just like a Prius.

I disagree with the FIA and I proposed to them an alternative way to resolve the problem (below). To me, it’s a pretty basic idea that something can only be cheap or expensive relative to the return you get from it. My feeling is that F1 wasn’t offering a sufficient return relative to its cost.

What follows is a letter to the FIA president, the chairman of the F1 Teams Association, and the promoter of F1.

Gentlemen,

The greatest threat to Formula One’s future is not cost, it is a loss of relevance. F1 is expensive, but the question is _is it worth it?_ Current cost-cutting measures do in fact cut costs, but they do so at the expense of relevance—the interest of the fans, and opportunity for the manufacturers. Of all the important areas in which road cars must improve, gas mileage is the most important—more so than safety, reliability, comfort, or even performance. I propose a single regulation by which designers could place the emphasis back on ultimate performance, and do so in a way that is relevant to conditions in the world today.

Formula One is a manufacturer’s sport, unlike NASCAR. Innovations that teams make, like Lotus’ development of ground effects under a V-engine and McLaren’s carbon monocoque are essential to fans’ interest in F1, and also essential to manufacturers’ investment in it. The innovations I mention are from decades ago, but today, with regulations as they are, manufacturers have little room within which to innovate. And with the exception of KERS, innovations as significant as those mentioned above are not currently possible. The costs of F1 will be _worth it_ if the innovations of F1 are significant and valuable to the manufacturers.

The spirit of high performance innovation that is central to F1’s brand has always been constrained by available resources and technology. As an example, the first championship in 1950 was won with pre-war engines—in effect, the availability of resources served as a type of regulation. Today’s regulations lack that organic connection to the resources and technologies of the contemporary world because today’s regulations prescribe such limited areas within which to make improvements.

What I propose is to deregulate engines and aerodynamics, and to instead limit the amount of fuel a car can use during a race. Currently, an F1 car gets about 1.3 km/l. I suggest limiting the amount of fuel used per car per race to a total quantity based on 1.5 or even 2 km/l, multiplied by each race’s overall distance.

This new regulation would provide a much stronger connection between F1 cars and road cars than exists today, give manufacturers more room within which to innovate, and increase their return on investment. It would result in a Cambrian explosion of car designs: smaller engines with bigger KERS devices; turbos; slipperier aerodynamic designs; and if it were allowed, diesel engines (which might increase the interest of Audi or Peugeot, resulting in more cars on the grid). A team like Williams or Force India might introduce a ground-breaking technology and dominate a season—something that today seems nearly impossible (and the inspiration for the innovation may cost nothing at all). Altogether, F1 could uphold the claim of being the pinnacle of motorsport.

Formula One teams compete against each other, but they are not competing to be cheaper, they are competing to be better. Better than each other on track, better than other forms of entertainment, and better as platforms for development. In other words, more relevant. For 2010 and beyond, please bear my proposal in mind when considering changes to F1’s regulations.

Sincerely,

(Yours Truly)
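To make the fuel arithmetic in the proposal concrete, here is a minimal sketch. The roughly 305 km race distance is an illustrative figure (race lengths vary), and the km/l targets are the ones named in the letter:

```python
def fuel_allowance_liters(race_distance_km, target_km_per_liter):
    """Total fuel each car may use in a race under the proposed rule."""
    return race_distance_km / target_km_per_liter

# A typical Grand Prix runs roughly 305 km.
for target in (1.3, 1.5, 2.0):  # today's ~1.3 km/l vs. the proposed targets
    print(f"{target} km/l -> {fuel_allowance_liters(305, target):.1f} liters")
# 1.3 km/l -> 234.6 liters
# 1.5 km/l -> 203.3 liters
# 2.0 km/l -> 152.5 liters
```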

Written by NAO

September 29, 2009 at 11:10 pm

Posted in Policy


Limits of Mental Models on policy creation


In a democracy, you can’t get better policies than ordinary citizens understand.
-Paul Collier on EconTalk.

But my discovery that many very simple programs produce great complexity immediately suggests a rather different explanation. For all it takes is that systems in nature operate like typical programs and then it follows that their behavior will often be complex. And the reason that such complexity is not usually seen in human artifacts is just that in building these we tend in effect to use programs that are specially chosen to give only behavior simple enough for us to be able to see that it will achieve the purposes we want.
-Stephen Wolfram, A New Kind of Science, Chapter 1, Page 3

The common thread of these two quotes, one dealing with political economy and the other with computer science, is modeling the unknown.

Often enough, policy and regulation are not thought of as being part of a system; I think people think of regulation in particular as a constraint on a system. Right now people are debating whether there is enough regulation of financial markets. What people aren’t debating is how the system is set up: how the financial markets are set up.

Another common thread of those two quotes is that nature can be more complex than people understand. There are a lot of things to draw from here, but right now I’d like to talk about the difficulty of creating a policy that truly builds on nature in order to improve human interactions.

Coming back to financial regulation, there is no salt shaker from which you can pour “more regulation” onto a system. It’s probably true that the SEC and/or the Fed didn’t monitor investment banks closely enough. It is also coming to light that they were oblivious to Bernie Madoff’s scheme until well after it should have been obvious. However, I believe that the solution of giving the SEC more authority, or the solution of making a new super-regulator, or the solution of putting more constraints on the activities of financial businesses, all miss the boat. It would be wiser to craft ways of redirecting incentives so that the behavior that society prefers is the behavior that finance professionals engage in.

Ah, but the difficulty, of course, is figuring out what such incentives should be. This is really the same as “programming” the system. In this case the programmers are in a position where they have to explain the program to people who don’t understand the system, and they have to explain why it is going to fix the problem. What I mean is that members of Congress (in conjunction with federal agencies that do have some expertise) have to create legislation and convince enough other members of Congress that it is good legislation for it to pass. And the way they do that is the same way they do anything else: their rhetorical responsibility is more to persuade common people that this is a good bill than it is to actually create a good bill. Thus, the regulation of the finance industry can’t be more complicated than what common people understand. Or, to adapt Stephen Wolfram’s words, we use regulations that are specially chosen to be simple enough for us to be able to see that they will achieve the purposes we want.

The critical thought there, which I hope comes through, is “to be able to see that” it will achieve the purposes we want. That means it doesn’t matter what the results will actually be; it only matters that we imagine the rules will lead to the results we want.

I’d love to get bogged down in whether democracy as a system is set up the best way, because it too is subject to the criticisms the two quotes above suggest. But you have to choose your battles. Keeping to financial regulation, perhaps we have a test case here: how on earth do you persuade people to adopt a set of regulations when they cannot see that those regulations will have the effects they will, in fact, have? (Step one would be experts figuring out in the first place what those regulations should be… My vote is to figure out how to augment natural incentives.)

How that could be done, I’ll leave to another post.

Written by NAO

July 8, 2009 at 12:01 am

Efficient Market Hypothesis – plus expectations


Lately there has been a lot of criticism of the “efficient market hypothesis.”  The basic idea (attributed to Eugene Fama) is that the price of a traded asset contains all the information about its value, and because of that, the price will change instantly to reflect new information.  Extended a bit: prices can’t be off by much, and if they are, only for a short time.  The criticism is that it is now obvious that the market was way off; there were huge asset price bubbles in the housing and equities markets.  Therefore, the critics say, the market isn’t “efficient” the way Fama described.

In addition to the defense that the hypothesis does indeed hold up (the market bubbles were, after all, corrected), I have another argument: I think emotional information, such as expectations, also bears on the market price of an asset.

The free market price of a good contains all the information about the value of the good, even emotional information. But overall, that information is a big mix: there are material costs, manufacturing costs, distribution costs, marketing costs; there are the prices of other similar goods, the value of the productive benefit of the good…  But the trick of it is that in every one of those factors there is a guess or a hunch or a wish.

People can say that the market isn’t really rational. But it is. The issue is the same as it is with rationality in people: emotion is a factor to be weighed rationally; intuition is a factor to be weighed rationally.

If you’re an agent in a market, say a homebuyer, you’ll be behaving rationally if you weigh your judgment of the house’s value according to the factors mentioned above.  But no matter what, there will be emotional information factored in.  Most quantifiable aspects of the house stay the same: the square footage, the number of bathrooms, the physical condition.  What changes are the less quantifiable aspects, like how the neighborhood has changed and how you feel about the house.  Perhaps most importantly, your expectation of the future value of the house can go up and down in unquantifiable ways.

If you wanted to buy a house in 2007, you had to pay the going rate.  You couldn’t go around saying “these prices are inflated, I will buy a house for less money.”  If that’s what you thought, you’d end up without a house. If you wanted a house, to be rational you would have to pay the inflated price.

At the time, the price of a house contained the aggregated expectation of how house values would appreciate.  The expectations were wrong, but it’s really the same as if everyone had been wrong about materials costs, for example thinking that lumber cost twice as much as it really did.
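To see why a wrong expectation works like a wrong cost estimate, here is a toy decomposition (the model and all its numbers are my own invention, purely for illustration): treat the price as the sum of its component values, scaled by expected appreciation.

```python
# Toy model: price = sum of component values, scaled by expected appreciation.
# All numbers are invented for illustration.

def market_price(components, expected_appreciation):
    return sum(components.values()) * (1 + expected_appreciation)

house = {"land": 100_000, "construction": 150_000, "location premium": 50_000}

print(market_price(house, 0.0))  # 300000.0: no appreciation priced in
print(market_price(house, 0.3))  # 390000.0: buyers expect 30% appreciation

# A mistaken 30% expectation inflates the price through the same channel as a
# mistaken cost estimate would; neither error makes the buyer irrational,
# just wrong.
```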

Thus, all market transactions are subject to wrong–but not irrational–expectations.  If the efficient market hypothesis were modified to include emotional information, then what currently appears as a bubble would no longer seem so: prices were high because people expected them to rise higher.  Furthermore, I believe the timing of the decline in asset values in 2008 and 2009 coincides much more closely with popular expectations (and fears) than with changes in the intrinsic value of companies.  Bear Stearns and Lehman Brothers were already overleveraged and running unsustainable businesses while asset prices were still rising; homebuyers were already overreaching their ability to pay mortgages while home prices were still rising.  It wasn’t until after Lehman that the crisis seemed to reach beyond the finance industry and into the whole economy; and it was about two weeks after their bankruptcy that the stock market stopped declining slowly and started falling precipitously.

Written by NAO

July 2, 2009 at 3:29 am

Posted in Economics
