I’m not sure this is a good idea:
If the Federal Reserve were to combine historical data with real-time feeds from a range of additional data sources, I believe there would be fewer policy blunders and a higher probability of achieving the Fed’s statutory goals of maximum employment and stable prices.
Second, instead of the Federal Reserve meeting eight times a year to decide whether the federal funds rate needs to be adjusted, the rate should adjust continuously in real time through a closed-loop mechanism.
The third and final principle would be to implement tiny interest rate adjustments.
Instead of the typical 25-basis-point move, the model should move in increments as small as 1/100th of a point, i.e., a single basis point. The new Fed chairman wants to provide more clarity: what better way than to let people see the rate, and the potential rate change, every day based on incoming data? This computer-based model would officially welcome the Federal Reserve to the real-time information age.
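For concreteness, the proposal above amounts to a feedback controller: each day the rate nudges toward some policy target, capped at one basis point per step. Here is a minimal sketch in Python; the Taylor-rule-style target, its coefficients, and the input data are all my own illustrative assumptions, not anything Ranadivé or the Fed has specified.

```python
def target_rate(inflation, inflation_goal=2.0, output_gap=0.0,
                neutral_real_rate=2.0):
    """Hypothetical Taylor-rule-style target (percent): neutral real rate
    plus inflation, plus feedback on the inflation gap and output gap."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_goal)
            + 0.5 * output_gap)

def step(current_rate, inflation, output_gap, max_move=0.01):
    """One closed-loop update: move toward the target, clamped to at most
    one basis point (0.01 percentage points) in either direction."""
    gap = target_rate(inflation, output_gap=output_gap) - current_rate
    move = max(-max_move, min(max_move, gap))
    return round(current_rate + move, 4)

# Simulate 250 trading days with inflation at 1.5% and a -1% output gap:
# the target works out to 2.75%, so the rate drifts down 1bp per day.
rate = 5.25
for _ in range(250):
    rate = step(rate, inflation=1.5, output_gap=-1.0)
```

Note the design consequence: with a 1bp cap, closing a 250bp gap takes roughly a year of daily steps, so the "nimble" controller is in fact far slower to reach a new stance than a handful of 25bp moves at scheduled meetings.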
Vivek Ranadivé, CEO of Tibco Software, reckons the Federal Reserve should respond as nimbly as the best companies. I don’t have special insight into the tools the Fed does have, but I strongly suspect Ranadivé is off base in suggesting the Fed lacks access to plenty of real-time tools and data. My stronger objection, however, is to his belief that real-time response would provide better “management” of the economy.
Has the Fed done such a bad job? Its record compares pretty favorably to that of any corporate steersman, I suspect. And would decision-making improve with micro-adjustments made around the clock? Almost certainly not. There is real value in time and reflection, not least when changing foundational interest rates for the world’s biggest economy. If Ben Bernanke and his colleagues have a week or two to reflect on the data before they sit down at Federal Open Market Committee meetings, more power to them.