Tuesday, June 17, 2008

How To Monitor The Economy

This column is not about theory. It is about the practical methods of observing the economy. Obviously, the more you understand correct theory, such as business cycle theory, the more meaningful will be the data you observe.

Data about the economy can be segmented into three separate categories: market price data, non-price raw data, and processed data.

Each of these three categories offers different insights about the economy, some more valuable than others.

The key to understanding an economy is to monitor as much data as possible, not in the sense of creating "economic models" that spit out forecasts failing to account for the complex, ever-changing nature of the economy, but by looking at the minutiae of economic data so that you can begin to understand what is really going on.

We have significant disagreements with the theoretical economic beliefs of Alan Greenspan, but he gets it right when he studies the minutiae.

The most valuable information about the economy comes from actual market price data. Market price data is unique in that it is not "assembled" data, but is the data of actual pricing going on in the markets. Watching actual prices results in receiving pure data untouched by human hands.

By watching actual prices, one can often notice trends in the economy that are not broadly acknowledged. For example, the price of the dollar vis-à-vis other currencies is, for all practical purposes, in an early-stage free fall. Over the last five years the dollar is down against most currencies by more than 30%. There is scant media coverage of this free fall, but by watching actual prices, the trend becomes obvious. Likewise, watching actual prices will clue in the individual watching the economy to, for example, the current rising costs of many goods, including wheat, gold and oil.
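
As a rough check on that kind of claim, a cumulative 30% decline over five years compounds to roughly a 7% annual rate of depreciation. The sketch below uses illustrative round numbers, not actual exchange-rate quotes:

```python
# Convert a cumulative multi-year decline into a compound annual rate.
# Figures are illustrative: 30% over 5 years, as in the text's rough claim.
cumulative_decline = 0.30
years = 5

# If the dollar retains (1 - 0.30) of its value after 5 years,
# the implied annual depreciation rate r satisfies (1 - r)^5 = 0.70.
annual_rate = 1 - (1 - cumulative_decline) ** (1 / years)
print(f"Implied annual depreciation: {annual_rate:.1%}")
```

The same arithmetic works in reverse: a steady 7% annual slide compounds to about a 30% loss over five years, which is why a slow-motion decline can escape daily media coverage while remaining obvious in the price data.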

Watching interest rates can also clue the observer in as to when the Fed is in an easing mode. If short-term interest rates are lower than long-term rates, the Fed is in an accommodative mode, because banks can profitably borrow short-term and lend long-term.
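
The borrow-short, lend-long logic can be sketched as a simple spread check. The rates below are hypothetical, not actual 2008 quotes:

```python
# Sketch of the carry logic: banks fund short-term and lend long-term,
# so a positive long-minus-short spread means lending is profitable.
def curve_signal(short_rate: float, long_rate: float) -> str:
    """Classify the yield curve: positive spread suggests an
    accommodative Fed; an inverted curve suggests the opposite."""
    return "accommodative" if long_rate > short_rate else "restrictive/inverted"

short_rate = 0.02   # hypothetical short-term funding cost
long_rate = 0.045   # hypothetical long-term lending yield
spread = long_rate - short_rate

print(f"Carry spread: {spread:.2%} -> {curve_signal(short_rate, long_rate)}")
```

With these assumed numbers the bank earns a 2.50% gross spread on every dollar borrowed short and lent long; when the curve inverts, that spread turns negative and the same trade loses money.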

The more prices you watch, the more you will understand about the economy.

After market prices, the best source for information about the economy is raw data. In the raw data camp, we put such items as auto sales, newspaper advertising, money supply numbers, etc. This information, often on an industry-by-industry basis, or better yet on a company-by-company basis, is generally private sector data.

If one watches the data minutiae in news releases by various corporations, one will begin to understand what is occurring in the economy: whether sales of autos are up or down, whether layoff announcements are spreading across a broad spectrum of industries or are limited to a specific sector such as the subprime industry, whether money raised in the securities markets is increasing or decreasing.

The more raw data you watch, the more you will understand about the economy.

But, because this data is assembled data, there is the possibility of human error or distortion. A particular industry group may have an incentive to assemble data in a way that benefits its agenda. Thus this data is not as pure as market price data, and the more "assembled" it is, the more one has to watch for distortions.

Government data and industry association data both have the potential to be finessed. Specific company data may sometimes be finessed, but because there are so many separate data points when looking at a multitude of individual company figures, an outlier piece of data will be spotted more quickly. With association and government data, the data is already assembled, so even that check of looking at individual pieces of data is gone.

Money supply data assembled by the Federal Reserve appears to be data that is fairly accurate, but the further one travels from the United States the more one has to be careful of even central bank data.

In his book, The Age of Turbulence, Greenspan relates this story about the Fed's dealing with the Asian currency crisis of 1997:

Korea's central bank was sitting on $25 billion in dollar reserves--ample protection against the Asian contagion, or so we thought.

What we didn't know, but soon discovered, was that the government had played games with those reserves. It had quietly sold or lent most of the dollars to South Korean commercial banks, which in turn had used them to shore up bad loans. So when Charlie Siegman, one of our top international economists, phoned a Korean central banker on Thanksgiving weekend and asked, "Why don't you release more reserves?" the banker answered, "We don't have any." What they'd published as reserves had already been spoken for.


The third category of data is processed data: data that is not only assembled, but to which assumptions are added because of a lack of raw data. Most often this data comes from government and includes such items as price indexes, unemployment figures, gross domestic product numbers and productivity numbers. In addition to being extremely dangerous data because of the assumptions made to create it, there is also, for some of the data, political pressure to finesse the numbers. Of all the data, we find processed data to be the least useful in getting a true grip on the economy. Unfortunately, it is some of the most widely followed data, which makes it important in the sense that markets will react to it on a short-term basis, even though it is probably the least reliable information about the economy.

In most situations, this data is skewed to a positive reading on the economy, but during a period of the 1990s productivity numbers were actually skewed dramatically negative. This probably happened because not as many eyes are focused on productivity numbers as on, say, inflation numbers or employment numbers. No one put pressure on the productivity people to make the numbers more positive, or, for that matter, to notice how out of whack they were with true productivity gains in the economy.

Greenspan, however, spotted the inaccuracies in the numbers. He caught this because of his detailed studies of economic numbers. He tells the story in his book:

The data we were getting from the Commerce and Labor departments showed that productivity (measured as output per hour worked) was virtually flat in spite of the long-running trend toward computerization. I could not imagine how that could be. Year in and year out, business had been pouring vast amounts of money into desktop computers, servers, networks, software, and other high-tech gear...This became evident as early as 1993, when new orders for high-tech capital began to accelerate after a protracted period of sluggish growth. The surge continued into 1994, suggesting that the early profit experience with the new equipment had been positive.

There were other, even more persuasive indications that the official productivity numbers were awry. Most companies were reporting rising profit margins. Yet few had raised prices. That meant their costs per unit of output were contained or even falling. Most consolidated costs (that is, for business considered as a whole) are labor costs. So if labor costs per unit of output were flat or declining, and the rate of growth of average hourly compensation was rising, it was an arithmetical certainty that, if these data were accurate, the growth of output per hour must be on the rise; productivity was truly accelerating.
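
Greenspan's chain of reasoning reduces to a growth-rate identity: unit labor cost growth is approximately compensation growth minus productivity growth. A minimal sketch with assumed round numbers (not actual BLS figures):

```python
# Greenspan's arithmetic in growth-rate form. The identity:
#   unit_labor_cost_growth ≈ compensation_growth - productivity_growth
# Rearranged:
#   productivity_growth ≈ compensation_growth - unit_labor_cost_growth
compensation_growth = 0.04    # assumed: hourly compensation rising 4%/yr
unit_labor_cost_growth = 0.0  # flat costs per unit (margins up, prices not)

implied_productivity_growth = compensation_growth - unit_labor_cost_growth
print(f"Implied productivity growth: {implied_productivity_growth:.1%}")
```

Under these assumptions, flat unit labor costs alongside 4% compensation growth force the conclusion that output per hour is also growing about 4% a year, regardless of what the official productivity series says.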


Bottom line, the statistics gatherers and econometricians at the Commerce and Labor departments missed the productivity gains of the personal computer!

Recent examples of problems with processed government data include consumer price index data and unemployment data.

Bloomberg's John F. Wasik had this to say about CPI numbers:

The U.S. consumer price index continues to be a testament to the art of economic spin.

Since wages, Social Security cost-of-living increases and some agency budgets are tied to it, the government has a vested interest in keeping it as low as possible.

Yet your real cost of living -- what you keep after taxes, medical bills, college expenses and other household costs -- is probably much higher than the 2 percent annual rate the government reported in July, showing a slight decline.... The single-largest expense for most Americans is housing, accounting for as much as a third of household outlays. Yet the Labor Department's Bureau of Labor Statistics only tracks "owner's equivalent rent," or what a home would yield if it was rented out. Rental units and homes are two very different animals, though, and the government casts a blind eye to total home ownership expenses.


Most recently, CPI numbers have been declining because of lower energy costs over the past three months. Anyone watching real price data knows this won't last, given that oil is trading at all-time highs! But the CPI distortions continue.

As for unemployment numbers, on September 7 we commented on the much weaker-than-expected payroll numbers released at the time:

An odd outlier was employment in local government education which fell by 32,000 in August, as seasonal hiring was less than usual. This unusual education jobs number accounted for approximately 30% of the difference between consensus forecasts and the actual payroll number. This would be the first time in history that a recession was led by summer school teachers!

Almost a month later, with the masses looking for the next distorted piece of government data, the WSJ commented on the outlier we discussed on the day of the release:

Many economists suspect the drop in August payrolls was exaggerated by a fluky fall in local government payrolls, and new data from the Bureau of Labor Statistics supports that.

On Tuesday the BLS released state payroll data for August. If you sum up the changes across the 50 states and the District of Columbia, the total rose 159,000, compared to the decline of 4,000 in the national data.

That doesn’t mean the national tally is wrong; sum-of-the-states data are hampered by differing response rates by states and the fact that the BLS seasonally adjusts each state separately, notes Ray Stone of Stone & McCarthy Research Associates.

That said, he says the difference is unusually large. In part that may simply be catchup, since the sum of the states total has lagged the national total for some time. But he says the principal source of divergence in August appears to be in government payrolls. The national tally of government jobs fell 28,000, while the sum-of-the-states tally rose 88,000.

Mr. Stone takes this as evidence that the national data have been distorted by quirky seasonal adjustment, which is made difficult by “the timing issues surrounding academic years, and the inconsistency in how teachers are paid over the summer. Some get paid on a 12-month basis, others on a 9- or 10-month basis. Any shift from year to year in the relative incidence of ‘months-paid’ will play havoc with the seasonally adjusted teacher payrolls.”


In short, prices are prices; they are the most accurate data out there. Raw data is the second most valuable, and the closer to the source the better. Government processed data not only suffers from political pressures but also faces the difficulty of being assembled data in which assumptions are made about all sorts of things: the quality of product from period to period, estimates for data not yet received, and seasonal adjustment factors that can greatly impact the final numbers.

The fact that the media, investors and traders focus on the processed data provides an edge to any trader or businessman willing to do the work and look at real prices and raw data. This study of real prices and raw-data minutiae (we'll call it the Greenspan Approach) is conducted in depth by very few. There are literally millions of prices and raw data points out there that no one is looking at from an analytical perspective. There is room for industry-specific analytical economists to do this work, and also for economists at the national and international macroeconomic level to do so. All that needs to be done is for economists to throw away their near-useless processing equations, look at the economy from a non-processed framework, and use the Greenspan Approach of simply studying the minutiae.

In The Age of Turbulence, Greenspan explains the philosophy of the approach this way:

I have always argued that an up-to-date set of the most detailed estimates for the latest available quarter is far more useful for forecasting accuracy than a more sophisticated model structure.

The fact that this Greenspan Approach is not being used, and that most economists tend to build forecasting model structures that suck the life out of data, indicates that true analytical work on the economy is still in its infancy.
