Sunday, March 4, 2012

Political Economy of Monarchy and Democracy, by Hans-Hermann Hoppe

I. The Comparative Economics of Private and Public Government Ownership
A government is a territorial monopolist of compulsion — an agency which may engage in continual, institutionalized property rights violations and the exploitation — in the form of expropriation, taxation and regulation — of private property owners. Assuming no more than self-interest on the part of government agents, all governments must be expected to make use of this monopoly and thus exhibit a tendency toward increased exploitation.[1]
However, not every form of government can be expected to be equally successful in this endeavor or to go about it in the same way. Rather, in light of elementary economic theory, the conduct of government and the effects of government policy on civil society can be expected to be systematically different, depending on whether the government apparatus is owned privately or publicly.[2]
 
The defining characteristic of private government ownership is that the expropriated resources and the monopoly privilege of future expropriation are individually owned. The appropriated resources are added to the ruler's private estate and treated as if they were a part of it, and the monopoly privilege of future expropriation is attached as a title to this estate and leads to an instant increase in its present value ("capitalization" of monopoly profit).
Most importantly, as private owner of the government estate, the ruler is entitled to pass his possessions on to his personal heir; he may sell, rent, or give away part or all of his privileged estate and privately pocket the receipts from the sale or rental; and he may personally employ or dismiss every administrator and employee of his estate.
In contrast, in a publicly owned government the control over the government apparatus lies in the hands of a trustee, or caretaker. The caretaker may use the apparatus to his personal advantage, but he does not own it. He cannot sell government resources and privately pocket the receipts, nor can he pass government possessions on to his personal heir. He owns the current use of government resources, but not their capital value.
Moreover, while entrance into the position of a private owner of government is restricted by the owner's personal discretion, entrance into the position of a caretaker-ruler is open. Anyone, in principle, can become the government's caretaker.
From these assumptions two central, interrelated predictions can be deduced:
  1. A private government owner will tend to have a systematically longer planning horizon, i.e., his degree of time preference will be lower, and accordingly, his degree of economic exploitation will tend to be less than that of a government caretaker; and
  2. Subject to a higher degree of exploitation, the nongovernmental public will also be comparatively more present-oriented under a system of publicly owned government than under a regime of private government ownership.
(1) Government Owners' Time Preferences
A private government owner will predictably try to maximize his total wealth, i.e., the present value of his estate and his current income. He will not want to increase his current income at the expense of a more-than-proportional drop in the present value of his assets, and because acts of current-income acquisition invariably have repercussions on present asset values (reflecting the value of all future — expected — asset earnings discounted by the rate of time preference), private ownership in and of itself leads to economic calculation and thus promotes farsightedness.
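As a minimal sketch of this capitalization logic (the notation is mine and does not appear in the original text): let E_t denote the expected net earnings of the government estate in year t and r the owner's rate of time preference. The present value of the estate is then

\[ PV = \sum_{t=1}^{\infty} \frac{E_t}{(1+r)^{t}}, \qquad \text{which reduces to } PV = \frac{E}{r} \text{ for constant expected earnings } E. \]

Any act of current extraction that depresses future earnings immediately registers as a loss of capital value, so an owner who maximizes current income plus PV internalizes the future cost of present exploitation, whereas a caretaker who receives only the current income does not.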
In the case of the private ownership of government, this implies a distinct moderation with respect to the ruler's incentive to exploit his monopoly privilege of expropriation, for acts of expropriation are by their nature parasitic upon prior acts of production on the part of the nongovernmental public. Where nothing has first been produced, nothing can be expropriated; and where everything is expropriated, all future production will come to a screeching halt.
Accordingly, a private government owner will want to avoid exploiting his subjects so heavily, for instance, as to reduce his future earnings potential to such an extent that the present value of his estate actually falls. Instead, in order to preserve or possibly even enhance the value of his personal property, he will systematically restrain himself in his exploitation policies. For the lower the degree of exploitation, the more productive the subject population will be; and the more productive the population, the higher will be the value of the ruler's parasitic monopoly of expropriation.
He will use his monopolistic privilege, of course; he will exploit. But as the government's private owner, it is in his interest to draw parasitically on a growing, increasingly productive and prosperous nongovernment economy, as this effortlessly also increases his own wealth and prosperity. The degree of exploitation will thus tend to be low.
Moreover, private ownership of government implies moderation and farsightedness for yet another reason. All private property is by definition exclusive property. He who owns property is entitled to exclude everyone else from its use and enjoyment; and he is at liberty to choose with whom, if anyone, he is willing to share in its usage. Typically, he will include his family and exclude all others, except as invited guests or as paid employees or contractors.
Only the ruling family — and to a minor extent its friends, employees and business partners — share in the enjoyment of the expropriated resources and can thus lead a parasitic life. Because of these restrictions regarding entrance into government and the exclusive status of the individual ruler and his family, private government ownership stimulates the development of a clear "class-consciousness" on the part of the nongovernmental public and promotes opposition and resistance to any expansion of the government's exploitative power.
A clear-cut distinction between the (few) rulers on the one hand and the (many) ruled on the other exists, and there is little risk or hope of anyone of either class ever falling or rising from one class to the other. Confronted with an almost insurmountable barrier in the way of upward mobility, the solidarity among the ruled — their mutual identification as actual or potential victims of governmental property-rights violations — is strengthened, and the risk to the ruling class of losing its legitimacy as the result of increased exploitation is heightened.[3]
In distinct contrast, the caretaker of a publicly owned government will try to maximize not total government wealth (capital values and current income), but current income (regardless, and at the expense, of capital values). Indeed, even if the caretaker wished to act differently, he could not: as public property, government resources are not for sale, and without market prices economic calculation is impossible. Accordingly, it must be regarded as unavoidable that public government ownership will result in continual capital consumption.
Instead of maintaining or even enhancing the value of the government estate, as a private owner would tend to do, a government's temporary caretaker will quickly use up as much of the government resources as possible, for what he does not consume now, he may never be able to consume.
In particular, a caretaker — as distinct from a government's private owner — has no interest in not ruining his country. For why should he not want to increase his exploitation if the advantage of a policy of moderation — the resulting higher capital value of the government estate — cannot be reaped privately, while the advantage of the opposite policy of increased exploitation — a higher current income — can be so reaped? To a caretaker, unlike to a private owner, moderation has only disadvantages and no advantages.[4]
In addition, with a publicly owned government, anyone in principle can become a member of the ruling class or even the supreme power. The distinction between the rulers and the ruled as well as the class consciousness of the ruled become blurred. The illusion even arises that the distinction no longer exists: that with a public government no one is ruled by anyone, but everyone instead rules himself. Accordingly, public resistance against government power is systematically weakened.
While exploitation and expropriation before might have appeared plainly oppressive and evil to the public, they seem much less so, mankind being what it is, once anyone may freely enter the ranks of those who are at the receiving end. Consequently, exploitation will increase, whether openly in the form of higher taxes or discreetly in the form of increased governmental money "creation" (inflation) or legislative regulation. Likewise, the number of government employees ("public servants") will rise both absolutely and relative to private employment, in particular attracting and promoting individuals with high degrees of time preference and limited farsightedness.
(2) Subjects' Time Preferences
Unlike the victim of a criminal attack, who retains the right to defend himself, the victim of government violations of private-property rights may not legitimately defend himself against such violations.[5]
The imposition of a government tax on property or income violates a property owner's and income producer's rights as much as theft does. In both cases, the owner-producer's supply of goods is diminished against his will and without his consent. Government money or "liquidity" creation involves no less a fraudulent expropriation of private-property owners than the operations of a criminal counterfeiting gang.
As well, any government regulation as to what an owner may or may not do with his property — beyond the rule that no one may physically damage the property of others and that all exchange and trade be voluntary and contractual — implies a "taking" of somebody's property, on a par with acts of extortion, robbery, or destruction. But taxation, the government's provision for liquidity, and government regulations, unlike their criminal equivalents, are considered legitimate, and the victim of government interference, unlike the victim of a crime, is not entitled to physically defend and protect his property.
Owing to their legitimacy, then, government violations of property rights affect individual time preferences in a systematically different and much more profound way than crime. Like crime, all government interference with private property rights reduces someone's supply of present goods and thus raises his effective time-preference rate. However, government offenses — unlike crime — simultaneously raise the time preference degree of actual and potential victims because they also imply a reduction in the supply of future goods (a reduced rate of return on investment).
Crime, because it is illegitimate, occurs only intermittently — the robber disappears from the scene with his loot and leaves his victim alone. Thus, crime can be dealt with by increasing one's demand for protective goods and services so as to restore or even increase one's future rate of investment return and make it less likely that the same or a different robber will succeed a second time.
In contrast, because they are legitimate, governmental property rights violations are continual. The offender does not disappear into hiding but stays around, and the victim does not "arm" himself but must (at least he is generally expected to) remain defenseless. The actual and potential victims of government property-rights violations respond by associating a permanently higher risk with all future production, and systematically adjusting their expectations concerning the rate of return on all future investment downward.
By simultaneously reducing the supply of present and expected future goods, then, governmental property-rights violations not only raise time preference rates (with given schedules) but also time-preference schedules. Because owner-producers are — and see themselves as — defenseless against future victimization by government agents, their expected rate of return on productive, future-oriented actions is reduced all-around, and accordingly, all actual and potential victims become more present-oriented.[6]
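One compact way to state the distinction between a higher rate and a higher schedule (the notation is mine and not in the original text): write the time-preference schedule as a decreasing function

\[ r = f(m), \qquad f'(m) < 0, \]

where m is an actor's supply of present goods and r his effective time-preference rate. A crime reduces m and thus raises r by a movement along the given schedule f. A legitimized and continual government violation reduces m as well, but because it also lowers the expected return on all future-oriented action, it shifts the entire schedule upward, from f to a higher schedule g with g(m) > f(m) for every m, so that the victim is more present-oriented even at any given supply of present goods.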
Moreover, because the degree of exploitation is comparatively higher under a publicly owned government, this tendency toward present-orientation will be significantly more pronounced if the government is publicly owned than if it is owned privately.[7]
II. Application: The Transition from Monarchy to Democracy (1789–1918)
Hereditary monarchies represent the historical example of privately owned governments, and democratic republics that of publicly owned governments.
For most of its history, mankind, insofar as it was subject to any government control at all, was under monarchical rule. There were exceptions: Athenian democracy, Rome during its republican era until 31 BC, the republics of Venice, Florence, and Genoa during the Renaissance, the Swiss cantons since 1291, the United Provinces from 1648 until 1673, and England under Cromwell from 1649 until 1660. Yet these were rare occurrences in a world dominated by monarchies, and with the exception of Switzerland they were short-lived phenomena.
Constrained by monarchical surroundings, the older republics satisfied the open-entry condition of public property only imperfectly. A republican form of government implies by definition that the government is publicly rather than privately owned, and a republic can thus be expected to possess an inherent tendency toward the adoption of universal suffrage; yet in all of the earlier republics, entry into government was limited to relatively small groups of "nobles."
With the end of World War I, mankind truly left the monarchical age.[8] In the course of the one-and-a-half centuries since the French Revolution, Europe, and in its wake the entire world, have undergone a fundamental transformation. Everywhere, monarchical rule and sovereign kings were replaced by democratic-republican rule and sovereign "peoples."
The first assault of republicanism and the idea of popular sovereignty on the dominating monarchical principle was repelled with the military defeat of Napoleon and the restoration of Bourbon rule in France. As a result of the revolutionary terror and the Napoleonic wars, republicanism was widely discredited for much of the 19th century.
However, the democratic-republican spirit of the French Revolution left a permanent imprint. From the restoration of the monarchical order in 1815 until the outbreak of WWI in 1914, popular political participation and representation were systematically expanded all across Europe. The franchise was successively widened, and the powers of popularly elected parliaments increased everywhere.[9]
From 1815 to 1830, the right to vote in France was still severely restricted under the restored Bourbons. Out of a population of some 30 million, the electorate included only France's very largest property owners — about 100,000 people (less than 0.5 percent of the population above the age of 20). As a result of the July Revolution of 1830, the abdication of Charles X and the ascension to the throne of the Duke of Orleans, Louis Philippe, the number of voters increased to about 200,000. As a result of the revolutionary upheavals of 1848, France again turned republican, and a universal and unrestricted suffrage for all male citizens above the age of 21 was introduced. Louis-Napoleon Bonaparte (the later Napoleon III) was elected president by nearly 5.5 million votes out of an electorate of more than 8 million.
In the United Kingdom after 1815, the electorate consisted of some 500,000 well-to-do property owners (about 4 percent of the population above age 20). The Reform Bill of 1832 lowered the property-ownership requirements and extended the franchise to about 800,000. The next extension, from about 1 million to 2 million voters, came with the Second Reform Bill of 1867. In 1884 property restrictions were relaxed even further, and the electorate increased to about 6 million (almost a third of the population above age 20 and more than three-fourths of all male adults).
In Prussia, the most important of the 39 independent German states recognized by the Congress of Vienna, democratization set in with the revolution of 1848 and the constitution of 1850. The lower chamber of the Prussian parliament was henceforth elected by universal male suffrage.
However, until 1918 the electorate remained stratified into three estates with different voting powers. For example, the wealthiest people — those who contributed a third of all taxes — elected a third of the members of the lower house.
In 1867, the North German Confederation, comprising Prussia and 21 other German states, was founded. Its constitution provided for universal, unrestricted suffrage for all males above the age of 25. In 1871, after the victory over Napoleon III, the constitution of the North German Confederation was essentially taken over by the newly founded German Empire. Out of a total population of around 35 million, nearly 8 million people (about a third of the population above 20) elected the first German Reichstag.
After Italy's political unification in 1861 under the leadership of the Kingdom of Sardinia-Piedmont, the vote was initially given to only about 500,000 people out of a population of some 25 million (about 3.5 percent of the population above age 20). In 1882, the property requirements were relaxed and the minimum age was lowered from 25 to 21 years. As a result, the Italian electorate increased to more than 2 million. In 1913, an almost universal and unrestricted suffrage for all males above 30 and a minimally restricted suffrage for males above 21 were introduced, raising the number of Italian voters to more than 8 million (more than 40 percent of the population above 20).
In Austria, restricted and unequal male suffrage was introduced in 1873. The electorate, composed of four classes, or curiae, with unequal voting powers, totaled 1.2 million voters out of a population of about 20 million (10 percent of the population above 20). In 1896 a fifth curia was added, and in 1907 the curia system was abolished and universal and equal suffrage for males above age 24 was adopted, bringing the number of voters close to 6 million (almost 40 percent of the population above 20).
Russia had elected provincial and district councils — zemstvos — since 1864; and in 1905, in the aftermath of its lost war against Japan, it created a parliament — the Duma — elected by a near universal, though indirect and unequal, male suffrage. As for Europe's minor powers, universal or almost universal and equal male suffrage had existed in Switzerland since 1848 and was adopted between 1890 and 1910 in Belgium, the Netherlands, Norway, Sweden, Spain, Greece, Bulgaria, Serbia, and Turkey.
Although increasingly emasculated, the monarchical principle remained dominant until the cataclysmic events of WWI. Before 1914, only two republics existed in Europe — France and Switzerland. And of all major European monarchies, only the United Kingdom could be classified as a parliamentary system; that is, one where the supreme power was vested in an elected parliament.
Only four years later, after the United States — where the democratic principle implied in the idea of a republic had only recently been carried to victory as a result of the destruction of the secessionist Confederacy by the centralist Union government[10] — had entered the European war and decisively determined its outcome, monarchies had all but disappeared, and Europe turned to democratic republicanism.[11]
In Europe, the defeated Romanovs, Hohenzollerns, and Habsburgs had to abdicate or resign, and Russia, Germany, and Austria became democratic republics with universal — male and female — suffrage and parliamentary governments. Likewise, all of the newly created successor states (with the sole exception of Yugoslavia) — Poland, Finland, Estonia, Latvia, Lithuania, Hungary, and Czechoslovakia — adopted democratic-republican constitutions.
In Turkey and Greece, the monarchies were overthrown. Even where monarchies remained nominally in existence, as in Great Britain, Italy, Spain, Belgium, the Netherlands, and the Scandinavian countries, monarchs no longer exercised any governing power. Universal adult suffrage was introduced, and all government power was vested in parliaments and "public" officials.[12] A new world order — the democratic-republican age, under the aegis of a dominating US government — had begun.
III. Evidence and Illustrations: Exploitation and Present-Orientedness under Monarchy and Democratic Republicanism
From the viewpoint of economic theory, the end of WWI can be identified as the point in time at which private government ownership was completely replaced by public government ownership, and from which a systematic tendency toward increased exploitation (government growth) and rising degrees of social time preference (present-orientedness) can be expected to take off. Indeed, this has been the grand, underlying theme of post-WWI Western history: from 1918 onward, practically all indicators
  1. of governmental exploitation and
  2. of rising time preferences
have exhibited a systematic upward tendency.
III.1. Indicators of Exploitation
There is no doubt that the amount of taxes imposed on civil society increased during the monarchical age.[13] However, throughout the entire period, the share of national income drawn by government remained remarkably stable and low. Economic historian Carlo M. Cipolla concludes,
All in all, one must admit that the portion of income drawn by the public sector most certainly increased from the eleventh century onward all over Europe, but it is difficult to imagine that, apart from particular times and places, the public power ever managed to draw more than 5 to 8 percent of national income.
And he then goes on to note that this portion was not systematically exceeded until the second half of the 19th century.[14] Until then, of all Western European countries only the United Kingdom had an income tax (from 1843 on). France first introduced some form of income tax in 1873, Italy in 1877, Norway in 1892, the Netherlands in 1894, Austria in 1898, Sweden in 1903, the United States in 1913, Switzerland in 1916, Denmark and Finland in 1917, Ireland and Belgium in 1922, and Germany in 1924.[15] Yet even at the time of the outbreak of WWI, total government expenditure as a percentage of Gross Domestic Product (GDP) typically had not risen above 10 percent and only rarely, as in the case of Germany, exceeded 15 percent. In striking contrast, with the onset of the democratic-republican age, total government expenditure as a percentage of GDP typically increased to 20 to 30 percent in the course of the 1920s and 1930s, and by the mid-1970s had generally reached 50 percent.[16]
There is also no doubt that total government employment increased during the monarchical age. But until the very end of the 19th century, government employment rarely exceeded 3 percent of the total labor force. In contrast, by the mid-1970s government employment as a percentage of the total labor force had typically grown to above 15 percent.[17]
The same pattern emerges from an inspection of inflation and the money supply. The monarchical world was generally characterized by the existence of a commodity money — typically silver or gold. A commodity money standard makes it difficult, if not impossible, for a government to inflate the money supply.
There had been attempts to introduce an irredeemable fiat currency. But these fiat-money experiments, associated in particular with the Bank of Amsterdam, the Bank of England, and John Law and the Banque Royale of France, had been regional curiosities which ended quickly in financial disasters, such as the collapse of the Dutch "Tulip Mania" in 1637, and the "Mississippi Bubble" and the "South Sea Bubble" in 1720. As hard as they tried, monarchical rulers did not succeed in establishing monopolies of pure fiat currencies, i.e., of irredeemable government paper monies, which can be created virtually out of thin air, at practically no cost.
It was only under conditions of all-around democratic republicanism, after 1918, that this feat was accomplished. During WWI, as during earlier wars, belligerent governments went off the gold standard. Unlike earlier wars, however, WWI did not conclude with a return to the gold standard. Instead, from the mid-1920s until 1971, and interrupted by a series of international monetary crises, a pseudo-gold standard — the gold-exchange standard — was implemented. In 1971, the last remnant of the international gold standard was abolished. Since then, and for the first time in history, the entire world has adopted a pure fiat-money system of freely fluctuating government paper currencies.[18]
As a result, a seemingly permanent secular tendency toward inflation and currency depreciation has come into existence.
During the monarchical age, with a commodity money largely outside of government control, the "level" of prices had generally fallen and the purchasing power of money increased, except during times of war or new gold discoveries. Various price indices for Britain, for instance, indicate that prices were substantially lower in 1760 than they had been a hundred years earlier; and in 1860 they were lower than they had been in 1760.[19] Connected by an international gold standard, the development in other countries was similar.[20]
In sharp contrast, during the democratic-republican age, with the world financial center shifted from Britain to the United States, a very different pattern emerged. For instance, shortly after WWI, in 1921, the US wholesale-commodity price index stood at 113.[21] After WWII, in 1948, it had risen to 185. In 1971 it was 255, by 1981 it reached 658, and in 1991 it was near 1,000. During only two decades of irredeemable fiat money, the consumer price index in the United States rose from 40 in 1971 to 136 in 1991, in the United Kingdom it climbed from 24 to 157, in France from 30 to 137, and in Germany from 56 to 116.[22]
Similarly, during more than 70 years, from 1845 until the end of WWI in 1918, the British money supply had increased about 6-fold.[23] In distinct contrast, during the 73 years from 1918 until 1991, the US money supply increased more than 64-fold.[24]
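To put the two figures on a comparable footing (a back-of-the-envelope calculation that is not in the original text), each can be converted into an average annual growth rate over its roughly 73-year span:

\[ 6^{1/73} - 1 \approx 0.025, \qquad 64^{1/73} - 1 \approx 0.059, \]

that is, money-supply growth of roughly 2.5 percent per year under the gold-constrained monarchical order versus roughly 5.9 percent per year under the fiat-money democratic-republican order.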
In addition to taxation and inflation, a government can resort to debt in order to finance its current expenditures. As with taxation and inflation, there is no doubt that government debt increased in the course of the monarchical age. However, as predicted theoretically, in this field monarchs also showed considerably more moderation and farsightedness than democratic-republican caretakers.
Throughout the monarchical age, government debts were essentially war debts. While the total debt thereby tended to increase over time, during peacetime at least monarchs characteristically reduced their debts. The British example is fairly representative. In the course of the 18th and 19th centuries, government debt increased. It was 76 million pounds after the Spanish War in 1748, 127 million after the Seven Years' War in 1763, 232 million after the American War of Independence in 1783, and 900 million after the Napoleonic Wars in 1815. Yet during each peacetime period (1727–1739, 1748–1756, and 1762–1775), total debt actually decreased. From 1815 until 1914, the British national debt fell from a total of 900 million to below 700 million pounds.
In striking contrast, since the onset of the democratic-republican age British debt only increased, in war and in peace. In 1920 it was 7.9 billion pounds, in 1938, 8.3 billion, in 1945, 22.4 billion, in 1970, 34 billion, and since then it has skyrocketed to more than 190 billion pounds in 1987.[25]
Likewise, US government debt has increased through war and peace. Federal government debt after WWI, in 1919, was about 25 billion dollars. In 1940 it was 43 billion, and after WWII, in 1946, it stood at about 270 billion. By 1970 it had risen to 370 billion, and since 1971, under a pure fiat-money regime, it has exploded. In 1979 it was about 840 billion, and in 1985 more than 1.8 trillion. In 1988 it reached almost 2.5 trillion, and by 1992 it exceeded 3 trillion dollars.[26]
Finally, the same tendency toward increased exploitation and present-orientation emerges upon examination of government legislation and regulation. During the monarchical age, with a clear-cut distinction between the ruler and the ruled, the king and his parliament were held to be under the law. They applied preexisting law as judge or jury.
They did not make law. Writes Bertrand de Jouvenel,
The monarch was looked on only as judge and not as legislator. He made subjective rights respected and respected them himself; he found these rights in being and did not dispute that they were anterior to his authority.… Subjective rights were not held on the precarious tenure of grant but were freehold possessions. The sovereign's right also was a freehold. It was a subjective right as much as the other rights, though of a more elevated dignity, but it could not take the other rights away.[27]
To be sure, the monopolization of law administration led to higher prices and/or lower product quality than those that would have prevailed under competitive conditions, and in the course of time kings employed their monopoly increasingly to their own advantage. But as late as the beginning of the 20th century, A.V. Dicey could still maintain that as for Great Britain, for instance, legislative law — public law — as distinct from preexisting law — private law — did not exist.[28]
In striking contrast, under democracy, with the exercise of power shrouded in anonymity, presidents and parliaments quickly came to rise above the law. They became not only judge but legislator, the creator of "new" law.[29] Today, notes Jouvenel,
we are used to having our rights modified by the sovereign decisions of legislators. A landlord no longer feels surprised at being compelled to keep a tenant; an employer is no less used to having to raise the wages of his employees in virtue of the decrees of Power. Nowadays it is understood that our subjective rights are precarious and at the good pleasure of authority.[30]
In a development similar to the democratization of money — the substitution of government paper money for private commodity money and the resulting inflation and increased financial uncertainty — the democratization of law and law administration has led to a steadily growing flood of legislation. Presently, the number of legislative acts and regulations passed by parliaments in the course of a single year is in the tens of thousands, filling hundreds of thousands of pages, affecting all aspects of civil and commercial life, and resulting in a steady depreciation of all law and heightened legal uncertainty.
As a typical example, the 1994 edition of the Code of Federal Regulations (CFR), the annual compendium of all US Federal Government regulations currently in effect, consists of a total of 201 books, occupying about 26 feet of library shelf space. The Code's index alone is 754 pages. The Code contains regulations concerning the production and distribution of almost everything imaginable: celery, mushrooms, watermelons, watchbands, the labeling of incandescent light bulbs, hosiery, iron and steel manufacturing, and onion rings made out of diced onions, revealing the almost-totalitarian power of a democratic government.[31]
III.2. Indicators of Present-Orientedness
The phenomenon of social time preference is somewhat more elusive than that of expropriation and exploitation, and it is more complicated to identify suitable indicators of present-orientation. Moreover, some indicators are less direct — "softer" — than those of exploitation. But all of them point in the same direction, and together they provide a clear illustration of the second theoretical prediction: that democratic rule also promotes shortsightedness (present-orientation) within civil society.[32]
The most direct indicator of social time preference is the rate of interest. The interest rate is the ratio of the valuation of present goods as compared to future goods. More specifically, it indicates the premium at which present money is traded against future money. A high interest rate implies more "present-orientedness" and a low rate of interest implies more "future-orientation."
Under normal conditions — that is, under the assumption of increasing standards of living and real-money incomes — the interest rate can be expected to fall and ultimately approach, yet never quite reach, zero. With rising real incomes, the marginal utility of present money falls relative to that of future money, and hence under the ceteris paribus assumption of a given time preference schedule, the interest rate must fall. Consequently, savings and investment will increase, future real incomes will be still higher, and so on.
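A minimal formalization of the premium just described (the symbols are mine and not in the original text): if P units of present money are exchanged against a claim to F units of money one year from now, the implied rate of interest is

\[ i = \frac{F}{P} - 1, \]

so that, for instance, giving up 100 now against 105 a year from now implies i = 5 percent. A lower time preference means a smaller premium F/P is demanded for waiting; with the time-preference schedule given, rising real incomes lower the marginal utility of present relative to future money and thus push i down.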
In fact, a tendency toward falling interest rates characterizes mankind's suprasecular trend of development. Minimum interest rates on "normal safe loans" were around 16 percent at the beginning of Greek financial history in the 6th century BC, and fell to 6 percent during the Hellenistic period. In Rome, minimum interest rates fell from more than 8 percent during the earliest period of the Republic to 4 percent during the first century of the Empire. In 13th-century Europe, the lowest interest rates on 'safe' loans were 8 percent. In the 14th century they came down to about 5 percent. In the 15th century they fell to 4 percent. In the 17th century they went down to 3 percent. And at the end of the 19th century, minimum interest rates had further declined to less than 2.5 percent.[33]
This trend was by no means smooth. It was frequently interrupted by periods, sometimes as long as centuries, of rising interest rates. However, such periods were associated with major wars and revolutions.
Furthermore, whereas high or rising minimum interest rates indicate periods of generally low or declining living standards, the overriding opposite tendency toward low and falling interest rates reflects mankind's overall progress — its advance from barbarism to civilization. Specifically, the trend toward lower interest rates reflects the rise of the Western World, its peoples' increasing prosperity, farsightedness, intelligence, and moral strength, and the unparalleled height of 19th-century European civilization.
Against this historical backdrop and in accordance with economic theory, one should expect 20th-century interest rates to be still lower than 19th-century rates. Indeed, only two possible explanations exist for why this is not so. The first possibility is that 20th-century real incomes did not exceed, or even fell below, 19th-century incomes. However, this explanation can be ruled out on empirical grounds, for it seems fairly uncontroversial that 20th-century incomes are in fact higher.
Then only the second explanation remains. If real incomes are higher but interest rates are not lower, then the ceteris paribus clause can no longer be assumed true. Rather, the social time preference schedule must have shifted upward. That is, the character of the population must have changed. People on the average must have lost in moral and intellectual strength and have become more present-oriented. Indeed, this appears to be the case.
From 1815 onward, throughout Europe and the Western World, minimum interest rates steadily declined to an historic low of, on the average, well below 3 percent at the turn of the century. With the onset of the democratic-republican age, this earlier tendency came to a halt and seems to have changed direction, revealing 20th-century Europe and the United States as declining civilizations.
An inspection of the lowest decennial average interest rates for Britain, France, the Netherlands, Belgium, Germany, Sweden, Switzerland, and the United States, for instance, shows that during the entire post-WWI era interest rates in Europe were never as low as, let alone lower than, they had been during the second half of the 19th century. Only in the United States, in the 1950s, did interest rates ever fall below late 19th-century rates. This was only a short-lived phenomenon, and even then US interest rates were not lower than they had been in Britain during the second half of the 19th century.
Instead, 20th-century rates were universally higher than 19th-century rates, and if anything they have exhibited a rising tendency.[34] This conclusion does not substantially change, even when it is taken into account that modern interest rates, in particular since the 1970s, include a systematic inflation premium. After adjusting recent nominal interest rates for inflation in order to yield an estimate of real interest rates, contemporary interest rates still appear to be significantly higher than they were 100 years ago.
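The inflation adjustment mentioned here can be sketched with the standard approximation (the formula and the illustrative numbers are mine, not the author's): with a nominal interest rate i and an expected inflation rate π, the real interest rate r satisfies

\[ 1 + r = \frac{1 + i}{1 + \pi}, \qquad \text{or roughly } r \approx i - \pi. \]

For example, a hypothetical nominal long-term rate of 8 percent combined with 4 percent expected inflation leaves a real rate of about 4 percent, still above the sub-3-percent rates reached at the end of the 19th century.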
On the average, minimum long-term interest rates in Europe and the US nowadays seem to be well above 4 percent and possibly as high as 5 percent — that is, above the interest rates of 17th-century Europe and as high as or higher than 15th-century rates. Likewise, current US savings rates of around 5 percent of disposable income are no higher than they were more than 300 years ago in a much poorer 17th-century England.[35]
Parallel to this development and reflecting a more specific aspect of the same underlying phenomenon of high or rising social time preferences, indicators of family disintegration — "dysfunctional families" — have exhibited a systematic increase.
Until the end of the 19th century, the bulk of government spending — typically more than 50 percent — went to financing the military. Assuming government expenditures to be then about 5 percent of the national product, this amounted to military expenditures of 2.5 percent of the national product. The remainder went to government administration.
Welfare spending or "public charity" played almost no role. Insurance was considered a matter of individual responsibility, and poverty relief was seen as the task of voluntary charity. In contrast, as a reflection of the egalitarianism inherent in democracy, the onset of democratization in the late 19th century brought with it the progressive collectivization of individual responsibility.
Military expenditures have typically risen to 5–10 percent of the national product in the course of the 20th century. But with public expenditures currently making up 50 percent of the national product, military expenditures now represent only 10–20 percent of total government spending. The bulk of public spending — typically more than 50 percent of total expenditures (or 25 percent of the national product) — is now eaten up by public-welfare spending.[36]
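The shift can be checked with simple arithmetic (the calculation, though not the underlying figures, is mine): under the old regime, military spending of roughly 2.5 percent of the national product out of total spending of 5 percent amounted to about half the budget; today, military spending of 5–10 percent of the national product out of total spending of 50 percent amounts to only 5/50 to 10/50 of the budget, i.e., 10–20 percent, while welfare spending of more than half the budget absorbs upwards of 25 percent of the national product.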
Consequently, by increasingly relieving individuals of the responsibility of having to provide for their own health, safety, and old age, the range and temporal horizon of private provisionary action have been systematically reduced. In particular, the value of marriage, family, and children has fallen, because they are needed less once one can fall back on "public" assistance.
Thus, since the onset of the democratic-republican age the number of children has declined, and the size of the endogenous population has stagnated or even fallen. For centuries, until the end of the 19th century, the birth rate had been almost constant: somewhere between 30 and 40 per 1,000 of population (usually somewhat higher in predominantly Catholic countries and lower in Protestant ones).
In sharp contrast, in the course of the 20th century all over Europe and the US birthrates have experienced a dramatic decline — down to about 15 to 20 per 1,000.[37] At the same time, the rates of divorce, illegitimacy, single parenting, singledom, and abortion have steadily increased, while personal savings rates have begun to stagnate or even fall rather than rise proportionally with rising incomes.[38]
Moreover, as a consequence of the depreciation of law resulting from legislation and the collectivization of responsibility effected in particular by social security legislation, the rate of crimes of a serious nature, such as murder, assault, robbery, and theft, has also shown a systematic upward tendency.
In the "normal" course of events — that is, with rising standards of living — it can be expected that the protection against social disasters such as crime will undergo continual improvement, just as one would expect the protection against natural disasters such as floods, earthquakes and hurricanes to become progressively better. Indeed, throughout the Western world this appears to have been the case by and large — until recently, during second half of the 20th century, when crime rates began to climb steadily upward.[39]
To be sure, there are a number of factors other than increased irresponsibility and shortsightedness brought on by legislation and welfare that may contribute to crime. Men commit more crimes than women, the young more than the old, blacks more than whites, and city dwellers more than villagers. Accordingly, changes in the composition of the sexes, age groups, races, and the degree of urbanization can be expected to have a systematic effect on crime.
However, all of these factors are relatively stable and thus cannot account for any systematic change in the long-term downward trend of crime rates. As for European countries, their populations were and are comparatively homogeneous; and in the United States, the proportion of blacks has remained roughly stable. The sex composition is largely a biological constant; and as a result of wars, only the proportion of males has periodically fallen, thus actually reinforcing the "normal" trend toward falling crime rates.
Similarly, the composition of age groups has changed only slowly; and due to declining birth rates and higher life expectancies the average age of the population has actually increased, thus helping to depress crime rates still further. Finally, the degree of urbanization began to increase dramatically from about 1800 onward. A period of rising crime rates during the early 19th century can be attributed to this initial spurt of urbanization.[40]
Yet, after a period of adjustment to the new phenomenon of urbanization, from the mid-19th century onward, the countervailing tendency toward falling crime rates took hold again, despite the fact that the process of rapid urbanization continued for about another hundred years. And when crime rates began to move systematically upward, from the mid-20th century onward, the process of increasing urbanization had actually come to a halt.
It thus appears that the phenomenon of rising crime rates cannot be explained other than with reference to the process of democratization: by a rising degree of social time preference, an increasing loss of individual responsibility, intellectual and moral, and a diminished respect for all law — moral relativism — stimulated by an unabated flood of legislation. Of course, "high time preference" is by no means equivalent to "crime." A high time preference can also find expression in such perfectly lawful forms of behavior as recklessness, unreliability, poor manners, laziness, stupidity, or hedonism.
Nonetheless, a systematic relationship between high time preference and crime exists, for in order to earn a market income a certain minimum of planning, patience and sacrifice is required. One must first work for a while before one gets paid. In contrast, most serious criminal activities such as murder, assault, rape, robbery, theft, and burglary require no such discipline. The reward for the aggressor is immediate and tangible, whereas the sacrifice — possible punishment — lies in the future and is uncertain. Consequently, if the social degree of time preference were increased, it would be expected that the frequency in particular of these forms of aggressive behavior would rise — as they in fact did.[41]
IV. Conclusion: Monarchy, Democracy, and the Idea of a Natural Order
From the vantage point of elementary economic theory and in light of historical evidence, then, a revisionist view of modern history results. The Whig theory of history, according to which mankind marches continually forward toward ever higher levels of progress, is incorrect. From the viewpoint of those who prefer less exploitation over more and who value farsightedness and individual responsibility above shortsightedness and irresponsibility, the historic transition from monarchy to democracy represents not progress but civilizational decline.
Nor does this verdict change if more or other indicators are included. Quite to the contrary. Without question the most important indicator of exploitation and present-orientedness not discussed above is war. If this indicator were included, the relative performance of democratic-republican government would appear even worse, not better. In addition to increased exploitation and social decay, the transition from monarchy to democracy has brought a change from limited warfare to total war, and the 20th century, the age of democracy, must also be ranked among the most murderous periods in all of history.[42]
Thus, inevitably two final questions arise. What can we expect? And what can we do? As for the first question, the answer is brief. At the end of the 20th century, democratic republicanism in the United States and all across the Western world has apparently exhausted the reserve fund that was inherited from the past. For decades, real incomes have stagnated or even fallen.[43] The public debt and the cost of social security systems have brought on the prospect of an imminent economic meltdown.
At the same time, societal breakdown and social conflict have risen to dangerous heights. If the tendency toward increased exploitation and present-orientedness continues on its current path, the Western democratic welfare states will collapse as the East European socialist peoples' republics did in the late 1980s. Hence one is left with only the second question: what can we do in order to prevent the process of civilizational decline from running its full course to an economic and social catastrophe?
First, the idea of democracy and majority rule must be delegitimized. Ultimately, the course of history is determined by ideas, be they true or false. Just as kings could not exercise their rule unless a majority of public opinion accepted such rule as legitimate, so will democratic rulers not last without ideological support in public opinion.[44]
Likewise, the transition from monarchical to democratic rule must be explained as fundamentally nothing but a change in public opinion. In fact, until the end of WWI, the overwhelming majority of the public in Europe accepted monarchical rule as legitimate.[45] Today, hardly anyone would do so.
On the contrary, the idea of monarchical government is considered laughable. Consequently, a return to the "ancien régime" must be regarded as impossible. The legitimacy of monarchical rule appears to have been irretrievably lost. Nor would such a return be a genuine solution, for monarchies, whatever their relative merits, do exploit and do contribute to present-orientedness as well. Rather, the idea of democratic-republican rule must be rendered equally if not more laughable, not least by identifying it as the source of the ongoing process of decivilization.
But secondly, and still more importantly, at the same time a positive alternative to monarchy and democracy — the idea of a natural order — must be spelled out and understood. On the one hand, and simply enough, this involves the recognition that it is not exploitation, either monarchical or democratic, but private property, production, and voluntary exchange that are the ultimate source of human civilization.
On the other hand, psychologically more difficult to accept, it involves the recognition of a fundamental sociological insight (which incidentally also helps identify precisely where the historic opposition to monarchy went wrong): that the maintenance and preservation of a private-property-based exchange economy requires as its sociological presupposition the existence of a voluntarily acknowledged "natural" elite — a nobilitas naturalis.[46]
The natural outcome of the voluntary transactions between various private property owners is decidedly nonegalitarian, hierarchical, and elitist. As the result of widely diverse human talents, in every society of any degree of complexity a few individuals quickly acquire the status of an elite. Owing to superior achievements of wealth, wisdom, bravery, or a combination thereof, some individuals come to possess "natural authority," and their opinions and judgments enjoy widespread respect.
Moreover, because of selective mating and marriage and the laws of civil and genetic inheritance, positions of natural authority are more likely than not passed on within a few — noble — families. It is to the heads of these families with long-established records of superior achievement, farsightedness, and exemplary personal conduct, that men turn with their conflicts and complaints against each other, and it is these very leaders of the natural elite who typically act as judges and peacemakers, often free of charge, out of a sense of obligation required and expected of a person of authority or even out of a principled concern for civil justice, as a privately produced "public good."[47]
In fact, the endogenous origin of a monarchy (as opposed to its exogenous origin via conquest)[48] cannot be understood except against the background of a prior order of natural elites. The small but decisive step in the transition to monarchical rule — the original sin — consisted precisely in the monopolization of the function of judge and peacemaker. The step was taken once a single member of the voluntarily acknowledged natural elite — the king — could insist, against the opposition of other members of the social elite, that all conflicts within a specified territory be brought before him.
From this moment on, law and law enforcement became more expensive: instead of being offered free of charge or for a voluntary payment, they were financed with the help of a compulsory tax. At the same time, the quality of law deteriorated: instead of upholding the preexisting law and applying universal and immutable principles of justice, a monopolistic judge, who did not have to fear losing clients as a result of being less than impartial in his judgments, could successively alter and pervert the existing law to his own advantage.
It was to a large extent the inflated price of justice and the perversions of ancient law by the kings which motivated the historical opposition against monarchy. However, confusion as to the causes of this phenomenon prevailed. There were those who recognized correctly that the problem lay with monopoly, not with elites or nobility.[49] But they were far outnumbered by those who erroneously blamed it on the elitist character of the ruler instead, and who accordingly advocated maintaining the monopoly of law and law enforcement while merely replacing the king and the visible royal pomp with the "people" and the presumed modesty and decency of the "common man." Hence the historic success of democracy.
Ironically, the monarchy was then destroyed by the same social forces that kings had first stimulated when they began to exclude competing natural authorities from acting as judges. In order to overcome their resistance, kings typically aligned themselves with the people, the common man.[50]
Appealing to the always popular sentiment of envy, kings promised the people cheaper and better justice in exchange for taxing — and thereby cutting down to size — their own betters (that is, the kings' competitors). When the kings' promises turned out to be empty, as was predictable, the same egalitarian sentiments which they had previously courted now turned against them.
After all, the king himself was a member of the nobility, and as a result of the exclusion of all other judges, his position had become only more elevated and elitist and his conduct only more arrogant. Accordingly, it appeared only logical then that kings, too, should be brought down and that the egalitarian policies, which monarchs had initiated, be carried through to their ultimate conclusion: the monopolistic control of the judiciary by the common man.
Predictably, as explained and illustrated in detail above, the democratization of law and law enforcement — the substitution of the people for the king — made matters only worse, however. The price of justice and peace has risen astronomically, and all the while the quality of law has steadily deteriorated to the point where the idea of law as a body of universal and immutable principles of justice has almost disappeared from public opinion and has been replaced by the idea of law as legislation (government-made law).
At the same time, democracy has succeeded where monarchy only made a modest beginning: in the ultimate destruction of the natural elites. The fortunes of great families have dissipated, and their tradition of a culture of economic independence, intellectual farsightedness, and moral and spiritual leadership has been lost and forgotten. Rich men still exist today, but more frequently than not they owe their fortune now directly or indirectly to the state.
Hence, they are often more dependent on the state's continued favors than people of far lesser wealth. They are typically no longer the heads of long-established leading families but "nouveaux riches." Their conduct is not marked by special virtue, dignity, or taste but is a reflection of the same proletarian mass-culture of present-orientedness, opportunism, and hedonism that the rich now share with everyone else; and consequently, their opinions carry no more weight in public opinion than anyone else's.
Hence, when democratic rule has finally exhausted its legitimacy, the problem faced will be significantly more difficult than when kings lost their legitimacy. Then, it would have been sufficient by and large to abolish the king's monopoly of law and law enforcement and replace it with a natural order of competing jurisdictions, because remnants of natural elites who could have taken on this task still existed.
Now, this will no longer be sufficient. If the monopoly of law and law enforcement of democratic governments is dissolved, there appears to be no other authority to whom one can turn for justice, and chaos would seem to be inevitable. Thus, in addition to advocating the abdication of democracy, it is now of central strategic importance that at the same time ideological support be given to all decentralizing or even secessionist social forces; that is, the tendency toward political centralization that has characterized the Western world for many centuries, first under monarchical rule and then under democratic auspices, must be systematically reversed.[51]
Even if as a result of a secessionist tendency a new government, whether democratic or not, should spring up, territorially smaller governments and increased political competition will tend to encourage moderation as regards exploitation. And in any case, only in small regions, communities or districts will it be possible again for a few individuals, based on the popular recognition of their economic independence, outstanding professional achievement, morally impeccable personal life, and superior judgment and taste, to rise to the rank of natural, voluntarily acknowledged authorities and lend legitimacy to the idea of a natural order of competing judges and overlapping jurisdictions — an "anarchic" private law society — as the answer to monarchy and democracy.
