Benched
The Supreme Court and the struggle for judicial independence.
by Jill Lepore, June 18, 2012
Originally, the Supreme Court
of the United States met in a drafty room on the second floor of an old
stone building called the Merchants’ Exchange, at the corner of Broad
and Water Streets, in New York. The ground floor, an arcade, was a stock
exchange. Lectures and concerts were held upstairs. For meeting, there
weren’t many places to choose from. Much of the city had burned to the
ground during the Revolutionary War; nevertheless, New York became the
nation’s capital in 1785. After George Washington was inaugurated in
1789, he appointed six Supreme Court Justices—the Constitution doesn’t
say how many there ought to be—but on February 1, 1790, the first day
the Court was called to session, upstairs in the Exchange, only three
Justices showed up and so, lacking a quorum, court was adjourned.
Months
later, when the nation’s capital moved to Philadelphia, the Supreme
Court met in City Hall, where it shared quarters with the mayor’s court.
Not long after, the Chief Justice, John Jay, wrote to the President to
let him know that he was going to skip the next session because his wife
was having a baby (“I cannot prevail on myself to be then at a Distance
from her,” Jay wrote to Washington), and because there wasn’t much on
the docket, anyway.
This spring, the Supreme Court—now housed in a building so ostentatious that Justice Louis Brandeis, who, before he was appointed to the bench, in 1916, was known as “the people’s attorney,” refused to move into his office—is debating whether the Affordable Care Act violates the Constitution, especially with regard to the word “commerce.” Arguments were heard in March. The Court’s decision will be final. It is expected by the end of the month.
Under
the Constitution, the power of the Supreme Court is quite limited. The
executive branch holds the sword, Alexander Hamilton wrote in the
Federalist No. 78, and the legislative branch the purse. “The judiciary,
on the contrary, has no influence over either the sword or the purse;
no direction either of the strength or of the wealth of the society; and
can take no active resolution whatever.” All judges can do is judge.
“The judiciary is beyond comparison the weakest of the three departments of power,” Hamilton concluded, citing, in a footnote, Montesquieu: “Of
the three powers above mentioned, the judiciary is next to nothing.”
The
Supreme Court used to be not only an appellate court but also a trial
court. People also thought it was a good idea for the Justices to ride
circuit, so that they’d know the citizenry better. That meant more time
away from their families, and, besides, getting around the country was a
slog. Justice James Iredell, who said he felt like a “travelling
postboy,” nearly broke his leg when his horse bolted. Usually, he had to stay at inns, where he shared rooms with strangers. The Justices hated
riding circuit and, in 1792, petitioned the President to relieve them
of the duty, writing, “We cannot reconcile ourselves to the idea of
existing in exile from our families.” Washington, who was childless, was
unmoved.
In 1795, when John Jay resigned from the office of Chief Justice to become governor of New York, Washington asked Alexander Hamilton to take his place; Hamilton said no. So did Patrick Henry. Anyone who wanted the job had to be a little nutty. The Senate rejected Washington’s next nominee for Jay’s replacement, the South Carolinian John Rutledge, whereupon Rutledge tried to drown himself near Charleston, crying out to his rescuers that he had been a judge for a long time and “knew of no Law that forbid a man to take away his own Life.”
In 1800, the capital moved to Washington, D.C., and the following year John Adams nominated his Secretary of State, the arch-Federalist Virginian John Marshall, to the office of Chief Justice. Adams lived in the White House. Congress met at the Capitol. Marshall took his oath of office in a “meanly furnished, very inconvenient” room in the Capitol Building, where the Justices, who did not have clerks, had no room to put on their robes (this they did in the courtroom, in front of gawking spectators), or to deliberate (this they did in the hall, as quietly as they could). Cleverly, Marshall made sure that all the Justices rented rooms at the same boarding house, so that they could at least have someplace to talk together, unobserved.
Marshall was gangly and quirky and such an avid listener that Daniel Webster once said that, on the bench, he took in counsel’s argument the way “a baby takes in its mother’s milk.” He became Chief Justice just weeks before Thomas Jefferson became President. Marshall was Jefferson’s cousin and also his fiercest political rival, if you don’t count Adams. Nearly the last thing Adams did before leaving office was to persuade the lame-duck Federalist Congress to pass the 1801 Judiciary Act, reducing the number of Supreme Court Justices to five—which would have prevented Jefferson from naming a Justice to the bench until two Justices left. The newly elected Republican Congress turned right around and repealed that act and suspended the Supreme Court for more than a year.
In February, 1803, when the Marshall Court finally met, it did something remarkable. In Marbury v. Madison, a suit against Jefferson’s Secretary of State, James Madison, Marshall claimed for the Supreme Court a power the Constitution had not explicitly granted it: the right to decide whether laws passed by Congress are constitutional. This was such an astonishing thing to do that the Court didn’t declare another federal law unconstitutional for fifty-four years.
The
Supreme Court’s decision about the constitutionality of the Affordable
Care Act will turn on Article I, Section 8, of the Constitution, the
commerce clause: “Congress shall have power . . . to regulate Commerce
with foreign Nations, and among the several States, and with the Indian
Tribes.” In Gibbons v. Ogden (1824), Marshall interpreted this clause broadly:
“Commerce, undoubtedly, is traffic, but it is something more: it is
intercourse.” (“Intercourse” encompassed all manner of dealings and
exchanges: trade, conversation, letter-writing, and even—if plainly
outside the scope of Marshall’s meaning—sex.) Not much came of this
until the Gilded Age, when the commerce clause was invoked to justify
trust-busting legislation, which was generally upheld. Then, during the
New Deal, the “power to regulate commerce,” along with the definition of
“commerce” itself, became the chief means by which Congress passed
legislation protecting people against an unbridled market; the Court
complied only after a protracted battle. In 1964, the commerce clause
formed part of the basis for the Civil Rights Act, and the Court upheld
the argument that the clause grants Congress the power to prohibit
racial discrimination in hotels and restaurants.
In 1995, in U.S.
v. Lopez, the Court limited that power for the first time since the
battle over the New Deal, when Chief Justice William Rehnquist, writing
for the majority, overturned a federal law prohibiting the carrying of
guns in a school zone: the argument was that gun ownership is not
commerce, because it “is in no sense an economic activity.” (In a
concurring opinion, Justice Clarence Thomas cited Samuel Johnson’s
Dictionary of the English Language.) Five years later, in U.S. v.
Morrison, Rehnquist, again writing for the majority, declared parts of
the federal Violence Against Women Act unconstitutional, arguing, again,
that no economic activity was involved.However the Court rules on health care, the commerce clause appears unlikely, in the long run, to be able to bear the burdens that have been placed upon it. So long as conservatives hold sway on the Court, the definition of “commerce” will get narrower and narrower, despite the fact that this will require, and already has required, overturning decades of precedent. Unfortunately, Article I, Section 8, may turn out to have been a poor perch on which to build a nest for rights.
There is more at stake, too. This Court has not been hesitant about exercising judicial review. In Marshall’s thirty-five years as Chief Justice, the Court struck down only one act of Congress. In the seven years since John G. Roberts, Jr., became Chief Justice, in 2005, the Court has struck down a sizable number of federal laws, including one reforming the funding of political campaigns. It also happens to be the most conservative court in modern times. According to a rating system used by political scientists, decisions issued by the Warren Court were conservative thirty-four per cent of the time; the Burger and the Rehnquist Courts issued conservative decisions fifty-five per cent of the time. So far, the rulings of the Roberts Court have been conservative about sixty per cent of the time.
What people think about judicial review usually depends on what they think about the composition of the Court. When the Court is liberal, liberals think judicial review is good, and conservatives think it’s bad. This is also true the other way around. Between 1962 and 1969, the Warren Court struck down seventeen acts of Congress. (“With five votes, you can do anything around here,” Justice William Brennan said at the time.) Liberals didn’t mind; the Warren Court advanced civil rights. Conservatives argued that the behavior of the Warren Court was unconstitutional, and, helped along by that argument, gained control of the Republican Party and, eventually, the Supreme Court, only to engage in what looks like the very same behavior. Except that it isn’t quite the same, not least because a conservative court exercising judicial review in the name of originalism suggests, at best, a rather uneven application of the principle.
The commerce clause has one history, judicial review another. They do, however, crisscross. Historically, the struggle over judicial review has been part of a larger struggle over judicial independence: the freedom of the judiciary from the other branches of government, from political influence, and, especially, from moneyed interests, which is why the Court’s role in deciding whether Congress has the power to regulate the economy is so woefully vexed.
Early
American colonists inherited from England a tradition in which the
courts, like the legislature, were extensions of the crown. In most
colonies, as the Harvard Law professor Jed Shugerman points out in “The
People’s Courts: Pursuing Judicial Independence in America” (Harvard),
judges and legislators were the same people and, in many, the
legislature served as the court of last resort. (A nomenclatural vestige
of this arrangement remains in Massachusetts, where the state
legislature is still called the General Court.)
In 1733, William
Cosby, the royally appointed governor of New York, sued his predecessor,
and the case was heard by the colony’s Supreme Court, headed by Lewis
Morris, who ruled against Cosby, whereupon the Governor removed Morris
from the bench and appointed James DeLancey. When essays critical of the
Governor appeared in a city newspaper, Cosby arranged to have the
newspaper’s printer, John Peter Zenger, tried for sedition. At the
trial, Zenger’s attorneys objected to the Justices’ authority, arguing
that justice cannot be served by “the mere will of a governor.” Then
DeLancey simply ordered Zenger’s attorneys disbarred.
Already in England, a defiant Parliament had been challenging the royal prerogative, demanding that judicial appointments be made not “at the king’s pleasure” but “during good behavior” (effectively, for life). Yet reform was slow to reach the colonies, and a corrupt judiciary was one of the abuses that led to the Revolution. In 1768, Benjamin Franklin listed it in an essay called “Causes of American Discontents,” and, in the Declaration of Independence, Jefferson included on his list of grievances the king’s having “made Judges dependent on his Will alone.”
The principle of judicial independence is related to another principle that emerged during these decades, much influenced by Montesquieu’s 1748 “Spirit of Laws”: the separation of powers. “The judicial power ought to be distinct from both the legislative and executive, and independent,” Adams argued in 1776, “so that it may be a check upon both.” There is, nevertheless, a tension between judicial independence and the separation of powers. Appointing judges to serve for life would seem to establish judicial independence, but what power then checks the judiciary? One idea was to have the judges elected by the people; the people then check the judiciary.
At the Constitutional Convention, no one argued that the Supreme Court Justices ought to be popularly elected, not because the delegates were unconcerned about judicial independence but because there wasn’t a great deal of support for the popular election of anyone, including the President (hence, the electoral college). The delegates quickly decided that the President should appoint Justices, and the Senate confirm them, and that these Justices ought to hold their appointments “during good behavior.”
Amid the debate over ratification, this proved controversial. In a 1788 essay called “The Supreme Court: They Will Mould the Government into Almost Any Shape They Please,” one anti-Federalist pointed out that the power granted to the Court was “unprecedented in any free country,” because its Justices are, finally, answerable to no one: “No errors they may commit can be corrected by any power above them, if any such power there be, nor can they be removed from office for making ever so many erroneous adjudications.” This is among the reasons that Hamilton found it expedient, in the Federalist No. 78, to emphasize the weakness of the judicial branch.
Jefferson, after his battle with Marshall, came to believe that “in a government founded on the public will, this principle operates . . . against that will.” In much the same spirit, a great many states began instituting judicial elections, in place of judicial appointment. You might think that elected judges would be less independent, more subject to political forces, than appointed ones. But timeless political truths are seldom true and rarely timeless. During the decades that reformers were lobbying for judicial elections, the secret ballot was thought to be more subject to political corruption than voting openly. Similarly, the popular vote was considered markedly less partisan than the spoils system: the lesser, by far, of two evils.
Nor was the nature of the Supreme Court set in stone. In the nineteenth century, the Court was, if not as weak as Hamilton suggested, nowhere near as powerful as it later became. In 1810, the Court moved into a different room in the Capitol, where a figure of Justice, decorating the chamber, had no blindfold but, as the joke went, the room was too dark for her to see anything anyway. It was also dank. “The deaths of some of our most talented jurists have been attributed to the location of this Courtroom,” one architect remarked. It was in that dimly lit room, in 1857, that the Supreme Court overturned a federal law for the first time since Marbury v. Madison. In Dred Scott v. Sandford, Chief Justice Roger B. Taney, writing for the majority, voided the Missouri Compromise by arguing that Congress could not prohibit slavery in the territories.
In 1860, the Court moved once more, into the Old Senate Chamber. When Abraham Lincoln was inaugurated, on the East Portico of the Capitol, Taney administered the oath, and Lincoln, in his address, confronted the crisis of constitutional authority. “I do not forget the position, assumed by some, that constitutional questions are to be decided by the Supreme Court,” he said, but “if the policy of the government, upon vital questions affecting the whole people, is to be irrevocably fixed by decisions of the Supreme Court, the instant they are made . . . the people will have ceased to be their own rulers, having to that extent, practically resigned their government into the hands of that eminent tribunal.” Five weeks later, shots were fired at Fort Sumter.
In the decades following the Civil War, an increasingly activist Court took up not only matters relating to Reconstruction, and especially to the Fourteenth Amendment, but also questions involving the regulation of business, not least because the Court ruled that corporations could file suits, as if they were people. And then, beginning in the eighteen-nineties, the Supreme Court struck down an entire docket of Progressive legislation, including child-labor laws, unionization laws, minimum-wage laws, and the progressive income tax. In Lochner v. New York (1905), in a 5–4 decision, the Court voided a state law establishing that bakers could work no longer than ten hours a day, six days a week, on the ground that the law violated a “liberty of contract,” protected under the Fourteenth Amendment. In a dissenting opinion, Justice Oliver Wendell Holmes accused the Court of wildly overreaching its authority. “A Constitution is not intended to embody a particular economic theory,” he wrote.
For a long time, legal scholars agreed with Holmes. But in “Rehabilitating Lochner: Defending Individual Rights Against Progressive Reform” (Chicago), David E. Bernstein, a law professor at George Mason University, takes issue with the logic by which Lochner has become “likely the most disreputable case in modern constitutional discourse.” Bernstein’s measured plea that Lochner be treated “like a normal, albeit controversial, case” is perfectly sensible; less persuasive is his argument that, by favoring individual rights over government regulation, Lochner-era rulings protected the interest of minorities.
Lochner led to an uproar. In 1906, Roscoe Pound, the eminent legal scholar and later dean of Harvard Law School, delivered an address before the American Bar Association called “The Causes of Popular Dissatisfaction with the Administration of Justice,” in which he echoed Holmes’s dissent in Lochner. “Putting courts into politics, and compelling judges to become politicians, in many jurisdictions has almost destroyed the traditional respect for the Bench,” he warned. Bernstein waves this aside, arguing that Pound didn’t wholly comprehend the facts of the case, and insists that any discontent with the Court’s ruling in Lochner abated almost immediately. The fact remains, however, that Lochner, together with a host of other federal- and state-court rulings, contributed to a surge of popular interest in judicial independence, including calls for “judicial removal”: the firing of judges by a simple majority of the legislature. In 1911, Arizona, preparing to enter the union, submitted a proposed constitution that included judicial recall, the removal of judges by popular vote, which was also a plank of Theodore Roosevelt’s Bull Moose campaign. The U.S. Congress approved the state’s constitution, but when it reached the White House William Howard Taft vetoed it; he objected to recall. Taft, who had been a judge before he became President, wanted not less judicial power but more. The next year, Taft began lobbying Congress for funds to erect for the Supreme Court a building of its own.
On October 13, 1932, Herbert Hoover
laid the cornerstone, at a construction site across from the Capitol.
The plan was to build the greatest marble building in the world; marble
had been shipped from Spain, Italy, and Africa. At the ceremony, after
Hoover emptied his trowel, Chief Justice Charles Evans Hughes delivered
remarks recalling the Court’s long years of wandering. “The court began
its work as a homeless department of the government,” Hughes said, but
“this monument bespeaks the common cause, the unifying principle of our
nation.”
In 1906, Hughes had run for governor of New York against
William Randolph Hearst; as against Hearst’s five hundred thousand
dollars, Hughes spent six hundred and nineteen dollars. Miraculously, he
won. Once in office, he pushed through the state legislature a
campaign-spending limit. In 1910, Taft appointed Hughes to the Supreme
Court, where, as a champion of civil liberties, he often joined with
Holmes in dissent. Hughes resigned from the bench in 1916 to run for
President; he lost, narrowly, to Woodrow Wilson. After serving as
Secretary of State under Warren G. Harding and Calvin Coolidge, he was
appointed Chief Justice in 1930. Three weeks after Hoover laid the cornerstone for the new Supreme Court Building, F.D.R. was elected President, defeating the incumbent by a record-breaking electoral vote: 472 to 59. As the New York Law School professor James F. Simon chronicles in “F.D.R. and Chief Justice Hughes: The President, the Supreme Court, and the Epic Battle Over the New Deal” (Simon & Schuster), the President-elect immediately began lining up his legislative agenda. He met with Holmes, who told him, “You are in a war, Mr. President, and in a war there is only one rule, ‘Form your battalion and fight!’ ”
By June of 1933, less than a hundred days after his Inauguration, F.D.R. had proposed fifteen legislative elements of his New Deal, all having to do with the federal government’s role in the regulation of the economy—and, therefore, with the commerce clause—and each had been made law. Now the New Deal had to pass muster in Hughes’s court, where four conservative Justices, known as the Four Horsemen, consistently voted in favor of a Lochnerian liberty of contract, while the three liberals—Louis Brandeis, Benjamin Cardozo, and Harlan Fiske Stone—generally supported government regulation. That left Hughes and Owen Roberts. In early rulings, Hughes and Roberts joined the liberals, and the Court, voting 5–4, let New Deal legislation stand. “While an emergency does not create power,” Hughes said, “an emergency may furnish the occasion for the exercise of the power.”
In the January, 1935, session, the Court heard arguments in another challenge. F.D.R., expecting an adverse decision, prepared a speech in which he quoted Lincoln’s remarks about Dred Scott, adding, “To stand idly by and to permit the decision of the Supreme Court to be carried through to its logical, inescapable conclusion” would “imperil the economic and political security of this nation.” The speech was never given. In another 5–4 decision, Hughes upheld F.D.R.’s agenda, leading one of the horsemen to burst out, “The Constitution is gone!”—a comment so unseemly that it was stricken from the record.
On May 27, 1935—afterward known as Black Monday—the Supreme Court met, for very nearly the last time, in the Old Senate Chamber. In three unanimous decisions, the Court devastated the New Deal. Most critically, it found that the National Industrial Recovery Act, which Roosevelt had called the “most important and far-reaching legislation in the history of the American Congress,” was unconstitutional, because Congress had exceeded the powers granted to it under the commerce clause. Four days later, the President held a press conference in the Oval Office. He compared the gravity of the decision to that of Dred Scott. Then he raged, “We have been relegated to the horse-and-buggy definition of interstate commerce.” But, in the horse-and-buggy days, the Court didn’t have half as much power as it had in 1935.
The Supreme Court’s new building
opened six months later, on October 7, 1935. A pair of reporters
described the place as “a classical icebox decorated for some surreal
reason by an insane upholsterer.” Nine Justices took their seats in the
same raggedy assortment of chairs they had used in the Senate Chamber.
Asked whether he wanted a new chair, Justice Cardozo had refused. “No,”
he replied slowly, “if Justice Holmes sat in this chair for twenty
years, I can sit in it for a while.”
And then the Hughes Court
went on a spree. In eighteen months, it struck down more than a dozen
laws. Congress kept passing them; the Court kept striking them down,
generally 5–4. At one point, F.D.R.’s Solicitor General fainted, right
there in the courtroom. The President began entertaining proposals about fighting back. One senator had an idea. “It takes twelve men to find a man guilty of murder,” he said. “I don’t see why it should not take a unanimous court to find a law unconstitutional.” That might have required a constitutional amendment, a process that is notoriously corruptible. “Give me ten million dollars,” Roosevelt said, “and I can prevent any amendment to the Constitution from being ratified by the necessary number of states.”
Meanwhile, the President was running for reëlection. A week before Election Day, an attack on the Hughes Court, titled “The Nine Old Men,” began appearing in the nation’s newspapers and in bookstores. F.D.R. defeated the Republican, Alf Landon, yet again breaking a record in the electoral college: 523 to 8. In February, 1937, Roosevelt floated his plan: claiming that the Justices were doddering, and unable to keep up with the business at hand, he would name an additional Justice for every sitting Justice over the age of seventy. There were six of them, including the Chief Justice, who was seventy-four.
The President’s approval rating fell. In a radio address on March 9, 1937, he argued that the time had come “to save the Constitution from the Court, and the Court from itself.” Then Hughes all but put the matter to rest. “The Supreme Court is fully abreast of its work,” he reported on March 22nd, in a persuasive letter to the Senate Judiciary Committee. If efficiency were actually a concern, he argued, there was a great deal of evidence to suggest that more Justices would only slow things down.
What happened next is clear: starting with West Coast Hotel Co. v. Parrish, a ruling issued on March 29, 1937, in a 5–4 opinion written by Hughes, that sustained a minimum-wage requirement, the Supreme Court began upholding the New Deal. Owen Roberts had switched sides, a move so sudden, and so crucial to the preservation of the Court, that it has been called “the switch in time that saved nine.” Why this happened is not quite as clear. It looked purely political. “Even a blind man ought to see that the Court is in politics,” Felix Frankfurter wrote to Roosevelt. “It is a deep object lesson—a lurid demonstration—of the relation of men to the ‘meaning’ of the Constitution.” It wasn’t as lurid as all that; it had at least something to do with the law.
On May 18, 1937, the Senate Judiciary Committee voted against the President’s proposal. The court-packing plan was dead. Six days later, the Supreme Court upheld the old-age-insurance provisions of the Social Security Act. The President, and his deal, had won.
On
either side of the Supreme Court steps, on top of fifty-ton marble
blocks, sits a sculpted figure: the Contemplation of Justice, on the
left, and the Authority of Law, on the right. In the pediment above the
portico, Liberty gazes into the future; Charles Evans Hughes crouches by
her side. Inside, a bronze statue of John Marshall stands in the Lower
Great Hall. Above him, etched into marble, are his remarks from Marbury
v. Madison: “It is emphatically the province and duty of the judicial
department to say what the law is.”
Within the walls of that
building, Dred Scott is nowhere to be found, and Lochner stalks the
halls like a ghost. Portraits of the first Chief Justices, starting with
John Jay, hang in the East Conference Room, and of the later Justices,
in the West. A portrait of Earl Warren was installed after his death, in
1974. Beginning with the Court’s ruling in Brown v. Board of Education,
in 1954, Warren presided over the most activist liberal court in
American history. “I would like this court to be remembered as the
people’s court,” Warren said when he retired, in 1969. He was pointing
to the difference between conservative judicial activism and liberal
judicial activism: one protects the interests of the powerful and the
other those of the powerless.
The Supreme Court has been deliberating in a temple of marble for three-quarters of a century. In March, it heard oral arguments about the Affordable Care Act. No one rode there in a horse and buggy. There was talk, from the bench, of heart transplants, and of a great many other matters unthinkable in 1789. Arguments lasted for three days. On the second day, the Solicitor General insisted that the purchase of health insurance is an economic activity. Much discussion followed about whether choosing not to buy health insurance is an economic activity, too, and one that Congress has the power to regulate. If you could require people to buy health insurance, Justice Antonin Scalia wanted to know, could you require them to buy broccoli? “No, that’s quite different,” the Solicitor General answered. “The food market, while it shares that trait that everybody’s in it, it is not a market in which your participation is often unpredictable and often involuntary.” This did not appear to satisfy.
The ruling that the Supreme Court hands down this month will leave unanswered questions about the relationship between the judicial and the legislative branches of government, and also between the past and the present. The separation of law from politics for which the Revolution was fought has proved elusive. That’s not surprising—no such separation being wholly possible—but some years have been better than others. One of the worst was 2000, when the Court determined the outcome of a disputed Presidential election. The real loser in that election, Justice John Paul Stevens said in his dissent in Bush v. Gore, “is the Nation’s confidence in the judge as an impartial guardian of the rule of law.”
For centuries, the American struggle for a more independent judiciary has been more steadfast than successful. Currently, nearly ninety per cent of state judges run for office. “Spending on judicial campaigns has doubled in the past decade, exceeding $200 million,” Shugerman reports. In 2009, after three Iowa supreme-court judges overturned a defense-of-marriage act, the American Family Association, the National Organization for Marriage, and the Campaign for Working Families together spent more than eight hundred thousand dollars to campaign against their reëlection; all three judges lost. “I never felt so much like a hooker down by the bus station,” one Ohio supreme-court justice told the Times in 2006, “as I did in a judicial race.”
Federally, few rulings have wreaked such havoc on the political process as the 2010 case Citizens United v. Federal Election Commission, whereby the Roberts Court struck down much of the McCain-Feingold Act, which placed restrictions on corporate and union funding of political campaigns. Stevens, in his dissent, warned that “a democracy cannot function effectively when its constituent members believe laws are being bought and sold.”
That, in the end, is the traffic to worry about. If not only legislators but judges serve at the pleasure of lobbyists, the people will have ceased to be their own rulers. Law will be commerce. And money will be king. ♦