How Political Bias Explains Everything

WILFRED REILLY


Experts make judgments based on political attitudes that affect their reliability.

What determines leadership-level decisions, including those made by the Supreme Court? Personal attitudes, albeit somewhat constrained by individual rules and norms. (Alex Wong/Getty Images)

According to the dogmas that currently rule America’s elite institutions, the single most important fact about any individual is their racial and gender identity. This quasi-religious belief results in conflict between the new identity-based framework and the older ideal that people are rational actors capable of arriving at an objective truth, independent of their personal background. But both of these views are wrong according to the attitudinal model, a paradigm that is popular in political science but widely ignored outside that discipline. Though it is not well known, the model almost perfectly explains the current “crisis of experts,” without resorting to the gaslighting and moral panics that so many “experts” have used to deny or explain away their failures.

Simply put, the attitudinal model is the codified idea that political preferences, especially when combined with a few other variables, generally predict how individuals will behave. The concept was first introduced by the political scientists Jeffrey Segal and Harold Spaeth in their 1993 book The Supreme Court and the Attitudinal Model. Segal and Spaeth assert that the notion that decisions by leaders capable of independent action, a category that includes SCOTUS justices, “are objective, dispassionate, and impartial [is] obviously belied by the facts.” Clearly, “different courts and different judges do not decide the same issue the same way,” and even decisions from the same court are invariably larded with concurrences, dissenting opinions, and so forth. A key point these authors make is that there will generally be enough respected precedent cases available on all sides in a major legal matter—or enough potential variables available in the context of an academic model—that anyone intelligent could find “no dearth … to support their assertions.”

What, then, determines leadership-level decisions? Personal attitudes, albeit somewhat constrained by individual rules and norms. “Decisions of the Court are based on the facts of the case in light of the ideologies, attitudes, and values of the Justices,” Segal and Spaeth write. The authors test this claim empirically—that is why the book is famous—and find that the position of individual judicial decision-makers on a standard (-1 to 1) scale measuring personal conservatism/liberalism predicts roughly 80% (.79) of their votes. Across a set of prominent death penalty cases, the political-ideology metric (a measure of the individual justices’ ideological leanings, compiled from their past voting behavior, “newspaper editorials,” and “off-bench speeches and writings”) predicted the behavior of every SCOTUS Justice in 19 out of 23 situations.

Attitudinally driven behavior among leaders stretches far beyond Supreme Court justices or appellate court judges. Segal and Spaeth also find that ideology is a near total predictor of executive branch nominations of judges: 87% of all Supreme Court nominees (126/145 at the time of writing) have come from the sitting president’s party. In theory, we might like to believe that a president selects the judge they believe is most qualified for a position, but in practice we know that they simply pick the person whose political attitudes are closest to their own. This trend apparently dates back to the very beginning of the United States: George Washington at one point nominated 11 highly partisan Federalists in a row for the bench.

Indeed, partisanship is a better predictor of being an elite judicial nominee than is “being a qualified judge,” as determined by past judicial service and bodies like the American Bar Association. Only 91 of the 145 Supreme Court nominees—73% of Republican and 48% of Democratic picks—met the American Bar Association’s standard, Segal and Spaeth write. Similarly, basic ideological variables predict 95% of the Yes/No votes of senators deciding whether or not to confirm these presidential judicial nominees. Within the court system, the attitudinal model is measurably predictive beyond a few top benches: Segal and Spaeth note very early on that the model “will fully predict other courts to the extent the environment of those approximates that of the Supreme Court.”

The largely undisputed fact that ideology shapes the behavior of solo leaders matters because of the extreme trend toward siloing in modern upper-middle-class life. Within my field—the academic social sciences—a 2006 survey found that about 18% of all faculty members identified as Marxists, another 24% as radicals, and 20%-21% as activists. In contrast, perhaps 5% of American soft-scientists are conservatives. In an environment this politically slanted, the odds are good that many shifts of focus attributed to new theory or empirical data—and indeed many overall social science conclusions—are largely the products of ideology.

What are some examples of such conclusions? For decades, academics believed that authoritarianism was an almost exclusively conservative trait. The idea dates back to Frankfurt School scholar Theodor Adorno’s book The Authoritarian Personality, and dozens of studies have “confirmed” it over the years. However, in 2021, skilled Emory Ph.D. student Thomas Costello noticed something simple but key: Tools used to measure authoritarianism tend to be “designed from the left,” and to focus on social problems which a right-winger would be more likely to oppose.

A typical survey question might read: “How important do you feel it is that American society harshly control (Communists)?” Costello realized that scholars could as easily frame nearly identical items from the other direction, asking—hypothetically—about the need to crack down on “Insurrectionists” or “anti-maskers.” His article, which contains a left-wing authoritarianism scale more complex than what I have described here but based on similar principles, was recently published in the Journal of Personality and Social Psychology. It now appears likely that left-wing authoritarianism is one of the more common forms of authoritarianism.

Then there is “racial resentment.” For decades now, many political scientists have argued that affirmative answers to questions like “Most Black people who receive money from welfare programs could get along without it if they tried (Yes or No)?” or “Italian, Irish, and Jewish ethnicities overcame prejudice and worked their way up—do you think Black people should do the same without any special favors?” provide a meaningful measure of the subtle racism that supposedly pervades American society. However, in recent years, skeptical scholars have begun administering the same racial resentment scales to minority Americans—most of whom score quite high on metrics of racial pride, and obviously almost none of whom are conventional bigots.

Results have been telling. According to a recent survey sponsored by the Kaiser Family Foundation and CNN, 42% of Black respondents believe “lack of motivation and willingness to work hard” is a “major cause” of hardships within the Black community, compared to 32% of white respondents who believe so. 61% of Black respondents, meanwhile, believe that “Breakup of the African American Family” was a “major cause” of those hardships, compared to roughly 55% of white respondents. Still another study, by Riley Carney and Ryan Enos, found rates of agreement with the provocative questions on the racial resentment scale did not change at all when lower-income or immigrant-origin white groups (e.g., Lithuanians) were substituted in for Black people. Dislike of affirmative action and welfare, it seems, correlates with conservatism and traditionalism across all groups, rather than with white racism.

In a thousand subtle ways, ideological bias can not only shape whole disciplines and domains of knowledge, but it can also weaponize scholarship against reality. To provide one example from my field: While the large numerical majority of police shooting victims in the U.S. are Caucasian, Black Americans are disproportionately likely to be shot by cops. We make up 13%-14% of the U.S. population, and roughly 25% of those fatally shot by law enforcement personnel in a typical year. However—and far fewer citizens know this—the Black violent crime rate is almost exactly 2.5 times the white violent crime rate, and any adjustment for this or for the racial difference in police encounter rate eliminates the discrepancy.

But many leftist academics have begun to argue that the crime rate disparity is itself simply more evidence of racism. Dr. Ibram Kendi, author of How to Be an Antiracist and a professor at American University, famously contends that any gap in performance between large groups must be due to systemic bias somewhere, and there are points that can be made about (say) differential enforcement of the United States’ drug laws. Though badly flawed, as I have noted elsewhere, these arguments are nevertheless widely accepted. And whether a particular scholar concludes that patterns of American police violence are racist might well depend on whether or not she believes these claims and so excludes differential crime rates from her models as a predictor variable.

In this environment, a smart skeptic would expect that “solo leaders” in academia and the media will behave in much the same fashion as those sitting in the courts. Rather than presenting impartial empirical evidence, research results will often strongly reflect the ideological priors of those producing the research. Taking the very simple “crime rates” example given above, in a situation where the vast majority of academic sociologists lean to the political left, we would expect a comparable percentage of researchers to drop the crime-differential variable from their equations and thus conclude that American police operate in a racially biased fashion.

Suppose that 90% of conservatives and Libertarians believe in a paradigm X (“Most policing is fair and nonbiased”), while 90% of leftists believe in paradigm Y (“All Western institutions are corrupt”). In a field that is 97% leftist, we would then expect 87.3% of sociologists (.97 x .9) to believe in paradigm Y and to reason forward from it. As the examples and data given above indicate, considerable evidence exists that this is essentially true.
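The arithmetic above can be written out as a one-line expectation. The sketch below is my own framing, not the author's: the function name is invented, and the 97%/90% figures are the illustrative numbers from the paragraph, with an optional second term for the (tiny) contribution of right-leaning believers that the back-of-the-envelope version ignores.

```python
def expected_share(field_left_share, left_belief_rate, right_belief_rate=0.0):
    """Expected fraction of a field holding paradigm Y, given the field's
    ideological composition and each group's rate of belief in Y."""
    right_share = 1.0 - field_left_share
    return field_left_share * left_belief_rate + right_share * right_belief_rate

# The article's figure counts only the leftist bloc of a 97%-leftist field:
print(round(expected_share(0.97, 0.90), 3))        # 0.873
# Adding the 3% right-leaning minority (10% of whom also hold Y) barely moves it:
print(round(expected_share(0.97, 0.90, 0.10), 3))  # 0.876
```

The point of the toy calculation is that once a field's ideological mix is lopsided enough, the field-wide consensus is almost entirely determined by one group's priors.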

But there is a bright spot to the discovery of entrenched ideological bias in academia. We can actually use attitudinal analysis to determine, with some accuracy, which ideas are truly bad. Citizens are frequently told that “the majority of the scholars in (Z) field” support one thing or another—with “gender affirming care” for minors being a recent example—and that the hoi polloi should not question the expert consensus. However, from an attitudinal perspective, whether such opinion majorities are relevant depends heavily on the ideological priors of the experts in question. If field Z leans 85% to the left, and 90% of American leftists support transgender surgeries for minors, but only 60% of that 85%-leftist pool of experts does, this actually indicates that gender affirming care is probably a terrible idea: Those most aware of the potential risks of the procedure are far more opposed to it than ideological peers with less empirical “inside information.”

Interestingly, something like this just occurred in the real world. The American Academy of Pediatrics (AAP) recently drew headlines after publicly reaffirming support for gender surgeries and hormone treatments for teenagers. However, the very left-leaning organization did so only after a hotly contested vote on an opposing resolution (“Addressing Alternatives to the Use of Hormone Therapies for Gender Dysphoric Youth”), which received 57 public endorsements from AAP members during the very brief period leading up to the referendum. Whatever their own politics may be, the nation’s leading academic pediatricians are by no means as unified on this issue as MSNBC makes them sound.

More broadly, a technique that could be used to develop a general attitudinal adjustment for field-specific bias is as follows: Simply determine (1) the L/R ideological breakdown of a particular academic field or sector, (2) the level of support for thing A within that sector, and (3) the level of support for thing A across all of the L/R ideological groups in society. This allows the calculation of (4) what level of support for thing A would almost certainly look like if the field ideologically matched society as a whole. Overall, we can probably say that popular niche ideas (“Defund and disarm the police”) that would be roundly rejected by any group that resembles the actual population are likely to be bad ones—and that ideas which are more often rejected than one would expect, even by partisan but experienced experts, are very likely to be bad ones.
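The four-step adjustment above amounts to a weighted average computed twice, once with the field's ideological mix and once with society's. The following is a minimal sketch under invented numbers: the function name, the 85/15 field split, the 90%/20% group support rates, and the 60% observed expert support are all hypothetical values for illustration, not figures from the article's sources.

```python
def support_if_matched(shares, group_support):
    """Step (4): expected support for a policy given an ideological mix
    (shares) and per-group support rates (step 3)."""
    return sum(shares[g] * group_support[g] for g in shares)

# (3) Hypothetical per-group support for policy A in society at large.
group_support = {"left": 0.90, "right": 0.20}
# (1) The field's ideological breakdown vs. society's.
field = {"left": 0.85, "right": 0.15}
population = {"left": 0.50, "right": 0.50}

predicted_in_field = support_if_matched(field, group_support)       # 0.795
benchmark = support_if_matched(population, group_support)           # 0.55

# (2) Suppose observed support among the experts is only 60%.
observed = 0.60
# The experts undershoot what their own ideological mix predicts --
# the article's signal that insiders are more skeptical than their peers.
print(observed < predicted_in_field)  # True
```

Comparing observed expert support against both baselines separates two questions the article runs together: whether an idea is merely a partisan artifact of the field's composition (observed vs. the population benchmark), and whether insiders know something their ideological peers do not (observed vs. the field's predicted level).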

But, in any case—while we’re calculating percentages—recall that there is a 100% chance that the output of any field at any time heavily reflects the ideological tastes of the very human people who make it up. We should recognize this, try to shift ideological monocultures at the extremes, and never ignore reality.


Wilfred Reilly, a political science professor at Kentucky State University, is the author of Taboo: 10 Facts You Can’t Talk About.

