We have been engaged in a long-term study of judicial voting patterns, and we recently published an op-ed in the Los Angeles Times in which we gave “awards” to Supreme Court justices based on a statistical study of their votes. The Judicial Neutrality Award went to Justice Anthony Kennedy. The Judicial Restraint Award went to Justice Stephen Breyer. The less coveted Partisan Voting Award went to Justice Clarence Thomas. Justice Antonin Scalia received the Judicial Activism Award.
In various circles, our op-ed seems to have caused a bit of a stir – especially, we suspect, because Thomas emerges as the most partisan justice, and Scalia as the most activist. (But we did not spare liberal members of the Court; Justice John Paul Stevens was a close second for partisanship.) Our goals here are to offer a more detailed explanation of our method, to provide some general remarks on partisanship and activism on the Supreme Court, and to respond to some criticisms.
Our focus was on the justices’ review of the legal interpretations of federal agencies, such as the Environmental Protection Agency, the Occupational Safety and Health Administration, the Federal Communications Commission, and the National Labor Relations Board. These are not the high-profile constitutional cases, but for purposes of doing a statistical analysis of judicial votes, they provide an unusually good data set. The Supreme Court’s leading decision, Chevron U.S.A., Inc. v. NRDC, Inc., 467 U.S. 837 (1984), commands judges to uphold agency interpretations of law, so long as those interpretations are “reasonable.” With this command, the Court has long insisted that courts should usually respect the decisions of the executive branch -- unless those decisions are plainly inconsistent with law.
This principle makes it possible to test for both judicial neutrality and judicial restraint. Suppose that a justice upholds liberal and conservative interpretations at the same rate. If so, the justice seems pretty neutral. Or suppose that a justice votes in favor of liberal agency interpretations far more often than he votes in favor of conservative agency decisions. If so, the justice seems pretty partisan. We can measure activism and restraint in a similar way. A judge who is unusually willing to uphold agency decisions can be counted as restrained. A judge who is unusually willing to strike down those decisions can be counted as activist. (Partisanship is hard to defend, but it is true that an activist judge might be right; our data do not permit a clear answer to that question.) It is in this light that Kennedy and Breyer win the desirable awards, while Thomas and Scalia get the less desirable ones.
Our analysis elicited rapid and vociferous criticism from many people, including Edward Whelan in the pages of the L.A. Times itself and a chorus of skeptics on the internet (http://volokh.com/posts/1193252908.shtml; http://volokh.com/posts/1193253032.shtml).
The critics contend, rightly, that we do not look at the high-profile constitutional cases. But the number of such cases is small, and in that domain it isn’t easy to test competing hypotheses about partisanship and restraint. Whelan argues that we fail to examine whether the agency ruling is correct. We agree that an ideal measure of judicial activism would identify the situations in which judges pursue their own ideological goals at the expense of the “correct” legal outcome. Many studies have demonstrated that ideology influences judicial decision-making in a vast range of legal contexts. But these studies generally provide no measure of the correctness of the judges’ decisions. The absence of a “correctness metric” reflects the fact that it is extremely difficult to measure correctness in a way that can support empirical tests of competing hypotheses.
We chose to investigate the justices’ votes in challenges to administrative agencies’ interpretations of law because this context provides an excellent way of testing for both partisanship and activism. The Court’s own decision in the Chevron case strongly suggests that a justice’s willingness to uphold an agency’s interpretation of law should not depend on whether the agency’s decision was liberal or conservative. We think that our approach is an innovation over the existing academic literature, and we know that it is a vast improvement over unsubstantiated, anecdote-driven claims about judicial behavior.
The critics allege that the design of our study is flawed because the distinctive context of agency decisions makes it more likely that conservative judges will appear activist. If the data set included mostly liberal decisions, then of course a liberal justice would show a higher validation rate than a conservative justice. But this objection is misconceived. In addition to measuring overall rates of agency validation for the justices, we also examined whether each justice was more likely to favor an agency when the agency decision was liberal rather than conservative. We coded the political orientation of each agency decision according to an objective method used by several prior academic studies. Agency decisions challenged by industry were coded liberal, and those challenged by public interest groups were coded conservative. If the distribution of agency decisions were skewed in a liberal direction, as some critics allege, we should have observed few or even no challenges from public interest groups. Instead, we observed a fair number of such challenges. Moreover, our study period included many decisions from both the Clinton and the Bush administrations, and it would be a big surprise if decisions by the latter were mostly “liberal.”
When we looked at the data, we observed two key facts. (1) Certain justices’ rates of validation -- but not others’ -- varied widely with their own political leanings. (2) Certain justices’ rates of validation -- but not others’ -- rose when the agency interpretation agreed with their political leanings and fell when it disagreed. These two patterns suggest that certain justices are, according to this admittedly imprecise metric, reaching decisions that are likely not correct. Moreover, the patterns strongly suggest that partisanship or ideology influenced certain decisions. (Justice Thomas is the prize-winner for partisanship, but Justice Stevens is a close second.) Judicial ideology appeared to influence some justices’ votes in the very context in which courts ought to defer to agencies. By our measure, these patterns smack of judicial activism.
A minor criticism lodged by Mr. Whelan and others is that our conclusions would differ if we limited our analysis to the period after 1994, when Justices Breyer and Ginsburg had joined the Court. This criticism is unwarranted. When we limit our analysis to the period after 1994, similar patterns emerge. Justice Breyer remains the most restrained in this period as well; he upheld agency decisions 84% of the time. Justice Scalia remains the most aggressive user of judicial power; in this period, he validated the agencies only 52% of the time.
With regard to partisanship, defined as political skew in a judge’s voting pattern, our results differ only slightly when we limit attention to the later period. Justice Scalia now edges out Justice Thomas for appearing the most partisan. When the agency decision was conservative, Justice Scalia voted to validate the agency 93% of the time, and when it was liberal, he voted to validate it only 33% of the time – a swing of 60 percentage points. Justice Thomas was a close runner-up in this period with a swing of 56 percentage points. Interestingly, Justice Souter was least partisan in this period – with a swing of less than 2 percentage points. Justices Kennedy and Ginsburg were runners-up for most neutral; each had a swing – but in opposite directions – of 12 percentage points.
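For readers who want to see the arithmetic behind the two measures, here is a minimal sketch in Python. The vote records below are invented for illustration only (they are not our data set); the restraint measure is simply a justice’s overall validation rate, and the partisanship “swing” is the validation rate on conservative agency decisions minus the rate on liberal ones, in percentage points.

```python
# Hypothetical vote records: (justice, agency decision's leaning, voted to uphold?).
# These rows are invented for illustration; they are not the study's actual data.
votes = [
    ("A", "conservative", True),
    ("A", "conservative", True),
    ("A", "liberal", False),
    ("A", "liberal", False),
    ("A", "liberal", True),
    ("B", "conservative", True),
    ("B", "conservative", False),
    ("B", "liberal", True),
    ("B", "liberal", False),
]

def validation_rate(justice, leaning=None):
    """Share of votes in which the justice upheld the agency,
    optionally restricted to agency decisions of one political leaning."""
    relevant = [upheld for (j, l, upheld) in votes
                if j == justice and (leaning is None or l == leaning)]
    return sum(relevant) / len(relevant)

def swing(justice):
    """Partisanship measure: validation rate on conservative agency
    decisions minus the rate on liberal ones, in percentage points."""
    return 100 * (validation_rate(justice, "conservative")
                  - validation_rate(justice, "liberal"))

for j in ("A", "B"):
    print(j, validation_rate(j), swing(j))
```

On these invented rows, Justice “A” upholds all conservative decisions but only a third of liberal ones (a large swing), while Justice “B” upholds half of each (a swing of zero) -- the neutral pattern the Chevron framework would lead one to expect.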
The claim that a judge is a “partisan,” or an “activist,” might be merely an inflammatory way of saying that a judge is “wrong.” But it is useful to develop actual hypotheses about partisanship and activism -- and to see what the evidence reveals. We have used evidence from the justices’ own voting records in a unique area, one in which the legal rules are fairly well-defined. Of course we could imagine other hypotheses and other tests. We invite skeptics to muster their own evidence rather than continue to rely on anecdote, speculation, innuendo, and name-calling.