The United States Senate is an institution steeped in tradition: from the filibuster to the amendment process, senators on both sides of the aisle take pride in their chamber’s status as the “world’s greatest deliberative body.” Among the lesser known of these conventions is the century-old blue slip. A blue slip is, literally, a small blue form that allows a senator from either party to halt proceedings on a judge nominated to a federal court within their state. During the Obama administration, Senate Majority Leader Mitch McConnell, a Republican from Kentucky, employed the procedure to block 18 judicial nominees from reaching the bench. On the surface, the tactic can look like a purely partisan tool for obstructing a president’s agenda. In reality, blue slips play an integral role in preventing majority dominance over the judiciary, and the blue slip is among the last of the important, if under-examined, conventions that protect minority power in Congress.

The significance of blue slips makes Senator McConnell’s recent comments calling for their removal particularly concerning. He voiced his discontent with Democrats’ use of the tradition to block President Trump’s judicial nominees, saying that blue slips should be used as “notification of how you’re going to vote, not as an opportunity to blackball.” Although a spokesman for Senator McConnell later walked back these comments and indicated that McConnell was merely talking about his own view on the tradition, the statements remain alarming.

McConnell faces immense pressure from conservative groups that are eager to push through President Trump’s judicial nominees. The unified GOP Senate and White House represent an opportunity for conservatives to pack federal courts with judges who, because of lifetime appointments, would be able to protect conservative policies for decades, even in increasingly purple states like Texas and Arizona where Republicans may soon lose their dominance. A recent Conservative Action Project memo demanded that McConnell require senators to move through Trump’s backlog of judicial hopefuls. The Judicial Crisis Network even planned to spend hundreds of thousands of dollars on a publicity campaign calling on McConnell to change Senate rules to expedite the judicial confirmation process. While McConnell successfully avoided a public showdown with fellow conservatives and negotiated an end to the potential ad campaign, this outside pressure is a serious threat to the independence of the Senate. Given McConnell’s previous statements and immense interest group pressure, it’s not hard to envision a scenario where this senatorial courtesy falls by the wayside.

Tearing up the blue slip would represent the latest development in a recent history of judicial nominations as a battleground for partisan politics. In 2013, Senate Democrats, frustrated with the Republican filibustering of President Obama’s court nominees, invoked the so-called “nuclear option” to end the filibuster for all judicial nominees aside from those to the Supreme Court. Using the nuclear option proved to be a costly mistake for Democrats, who, now in the minority, saw Republicans retaliate by circumventing the filibuster on Neil Gorsuch’s nomination to the Supreme Court. This came after the failed nomination of Merrick Garland, during which McConnell’s refusal to move forward with the Obama nominee allowed Republicans to usurp a Supreme Court seat from Democrats. This massive payoff came from a risky political gamble.

Moderation and barriers to the majority agenda are integral to the constitutional design of the Senate. To rid this legislative body of procedures that empower the opposition is to destroy one of its central tenets. The purpose of a congressional chamber separate from the House of Representatives was that the Senate, with its members serving six-year terms, would be insulated from the fads of public opinion. This insulation contrasts starkly with the biennially elected House of Representatives. As a result, the Senate became the deliberative entity where a small group of statesmen could craft legislation with compromise and moderation in mind, serving as an even-tempered counterpart to the people’s House. The founders were wise to create the Senate with compromise and restraint at its core to ensure that the many could never infringe upon the rights of the few via majority rule. The blue slip tradition, like the filibuster, is in line with this founding principle of American democracy as a bulwark for political minorities’ rights. The actions of both parties in rendering the filibuster useless for judicial nominees have eroded the institution’s tendency toward moderation, enabling whichever party controls the body to exert undue power over the direction of America’s judiciary.

Without the filibuster, blue slips are the Senate minority’s last defense against unfettered one-party domination of the judicial nomination process. Both parties have expressed interest in preserving the tradition of senatorial consultation—albeit more strongly when they were in the minority and protecting their own influence. In a recent memo, Senator Dianne Feinstein (D-California) outlined the history of the blue slip’s importance in forcing the White House to work with the Senate on judicial nominations. She also provided examples of the 18 separate occasions during the Obama presidency on which Republicans blocked an appointee via blue slipping without Democrats challenging the validity of the practice. Among these blocked nominees was Justice Lisabeth Hughes, whom President Obama nominated to the Sixth Circuit Court of Appeals in Kentucky. Senator McConnell, invoking his home-state privilege, blue slipped Justice Hughes after President Obama refused to nominate his pick, Judge Amul Thapar. By drawing out the nomination until after Trump’s inauguration, the Majority Leader successfully lobbied the president on Thapar’s behalf. Thapar was ultimately confirmed.

Although this scenario was less than ideal for Democrats, it shows the blue slip’s tendency to give minorities power to affect who sits on the bench. However, now that Republicans control the Senate, McConnell has developed the personal belief that blue slips shouldn’t be used to hold up a nominee. A 2009 letter to former President Obama from the entire Republican caucus called on the president to recognize “the importance of maintaining this principle, which allows individual senators to provide valuable insights into their constituents’ qualifications.” With every GOP senator publicly defending the blue slip in 2009, any deviation from that stance is seemingly motivated by the prospect of a partisan power grab.

Ultimately, the decision to gut the efficacy of blue slips lies with the Chairman of the Senate Judiciary Committee, Chuck Grassley (R-Iowa). Since blue slips are a tradition, not something codified in Senate rules, the Senate Judiciary Committee is responsible for applying the practice as it sees fit. A spokesman for Grassley said, “over the years, chairmen have applied the courtesy differently, but the spirit of consultation has always remained.” This indicated that Grassley was disinclined to end the custom. A six-term veteran of the Senate, Grassley has been around long enough to remember a time during which partisan rancor was the exception, not the norm. He should understand the importance of the Senate as an environment that fosters compromise.

However, his senatorial experience has not always stopped Grassley from voting to perpetuate partisanship at the expense of protecting the rights of the minority party: Grassley was among the Republicans who voted to go nuclear over Neil Gorsuch, eviscerating the judicial filibuster. Immense pressure from outside groups and the leadership of his own party no doubt weighed heavily on his decision, as it did with many Republicans who voted for the plan anyway, despite apprehension. Senator John McCain, for example, referred to the nuclear option proceedings as a “dark day” in Senate history even though he voted in favor of the measure. Although Grassley has expressed his unwillingness to remove blue slips, it’s not difficult to imagine a situation akin to the Gorsuch nomination that generates enough pressure from leadership and outsiders for the Senator to move against the convention.

Senate Republicans should possess the political foresight to understand that getting rid of blue slips would only hurt them in the long run, just like using the nuclear option for the first time in 2013 backfired on Democrats. After all, capricious political winds bite both ways.

There are many reasons to be optimistic about global poverty rates. As The Economist reported, “between 1990 and 2010, [the number of people in poverty] fell by half as a share of the total population in developing countries, from 43% to 21%—a reduction of almost 1 billion people.” As part of its Global Goals, the United Nations is hoping to continue this auspicious trend, announcing its intent to “eradicate extreme poverty for all people, everywhere” (measured as those living on less than $1.25 a day) by 2030.

Despite the UN’s abounding optimism and the laudable courage and initiative of humanitarian workers, there are serious challenges facing the future of foreign aid. In the past, the ideal recipients of foreign aid have been poor countries with stable governments. For example, aid for countries like Bangladesh and Senegal is generally successful because they are governed well enough to reduce fears of mismanagement and waste. But the number of countries like Bangladesh and Senegal is decreasing.

Aid today is most badly needed in fragile states, where governments are dysfunctional and often oppressive. The Economist reported that these countries were home to about a third of the world’s poorest people in 2010. Today, that share has reached about half. Fragile states are not only a problem because of their greater poverty rates; they are often geopolitical rogues that pose serious threats to the stability and security of their regions.

Given these high stakes, many international organizations are shifting their attention towards fragile states. The United Kingdom’s Department for International Development is planning to spend half of its budget on fragile states, while exhorting others to do the same. Perhaps more significantly, the World Bank is planning to double the amount of money it sends to fragile states over the next three years to $14 billion.

Though this global initiative is needed, it is also highly risky and costly. Donors can harm the already unstable governments of fragile states by setting up competing welfare systems that undermine domestic bureaucrats. This can rupture the social contract between citizens and their government.

Of course, there are good reasons to want to bypass the government of a fragile state. Donors and investors understandably worry that their altruism will be redirected for nefarious reasons if their money is left to a corrupt and brutal government that uses the donations to boost military spending and consolidate power.

And yet, in bypassing the corrupt governments of fragile states, aid efforts can further estrange people from their government, creating a new, perverse dependency on a foreign source. This can rupture the social contract by disincentivizing citizens from paying taxes, further weakening the government. Although, as researcher Robert Blair said in an interview, it is unclear whether this actually happens, the long-term effects of bypassing the government entirely are something aid workers worry about.

These pernicious effects are particularly true of food aid and goods transfers. Although it may look good in NGO photo albums, direct transfers of goods are costly, wasteful, and likely to crowd out domestic producers, undermining the local economy. Despite the rough consensus that providing aid in “goods in kind” is ultimately harmful because of the “crowding out” effect, Blair points out that context matters: it depends on whether or not there are actually people looking to sell the goods that would be sent as transfers. That is, aid in the form of donated clothing will not crowd out the local economy if there are no domestic businesses looking to sell clothing.  

Most importantly, however, direct transfers are often simply resold to pay for what people actually need. It’s far better just to give people money. There is an emerging consensus that simply providing cash transfers can help reduce poverty and allow people to use the money for what they really need.

But leaders in the fight against poverty in fragile states are looking to go far beyond this. Former UK Prime Minister David Cameron, who recently came to Brown, described in a Q&A his new job as chair of the LSE-Oxford fragility commission. The commission’s goals include generating “private sector development,” “effective state capacity,” and legitimate government.

In pursuing the first of these goals, private sector development, the commission has considered the role of investments in creating incentives for long-lasting engagement on both the supply and demand sides. The rationale is simple: people are likely to stick with an investment until they get some return on it or realize there is no hope of recovering it.

But there is a dark side to this free market approach. Since investors are more likely to put money in places where they expect some return, these investments often come in the form of concessions in natural resources—for instance, through an investment in a rubber tapping company. Though some may argue that such investments still boost the economies of fragile states, investors are often more self-interested than altruistic.

It’s true that investors will often put their money in a fragile state’s most important sectors to help secure their investment. Moreover, their work may be accompanied by additional investments in the roads and ports necessary for, say, mining ore. Unfortunately, this infrastructure is often entirely unhelpful for the local population and does little to address their real needs. These extractive practices can eliminate any benefit of investment aid.

To safeguard against these issues, the UN will often encourage fragile states to include clauses that force investors to invest in local public goods. As Blair explained, this was true in the case of Liberia, which historically signed very generous concessions with multinational firms so that they could tap rubber. When China Union came to mine iron, Liberia included a clause requiring the firm to also rehabilitate schools, build highways, and provide electricity. This seemed like an auspicious and mutually beneficial agreement.

But China Union never fulfilled any of these promises. Although it’s true that the Ebola Crisis and the mine’s poor returns gave China Union reservations, Liberia simply did not have the governmental strength to enforce the agreement. This is a widespread challenge that fragile states will continue to face.

The issues facing investment-based solutions to poverty in fragile states go far beyond this. Even NGOs and the altruistic investors they help cultivate are wary of pouring money into fragile countries. Although the potential for growth and the system of incentives seem promising, corruption, mismanagement, and oppressive governance endanger these prospects. A series of debilitatingly difficult questions arises: What happens if the investor doesn’t get their money back? If productivity benchmarks are set, how would they remain consistent when the economy is in flux? And how would an investment agreement even be enforced?

A small business receiving money from foreign investors could agree to an investment plan contingent on hitting, say, monthly goals and accept intense supervision. But this would potentially limit the company’s ability to make radical changes to its business that might prove necessary. Moreover, the high risk of investing in businesses in fragile countries implies a higher interest rate, which may deter potential investors altogether. For these reasons, the challenges facing investment in fragile states—as an alternative to cash transfers—are great.

Even if cash transfers were increased and used exclusively, it would not solve the structural, institutional failures of fragile states. Aid efforts consequently ought to work both to reduce poverty and empower people while simultaneously strengthening institutions. Investing in governmental agencies themselves can be a helpful approach. By pouring money into public health ministries, donors can ensure that the most basic logistical needs are met. Although it’s uncommon to see foreign donors do this, the UN has attempted these practices with some success.

But such forms of aid can also pose a moral conundrum. After all, many fragile states are run by oppressive, brutal governments that we could not possibly support in good faith. Despite this, global aid efforts would likely benefit from engaging with these regimes more than they do. Simply bypassing them can carry long-term consequences far graver than the short-run costs of engagement.

It’s hard to find a good solution to foreign aid in fragile states without changing the very institutions and means of government. The weak and corrupt regimes that lead these countries impose a frustratingly hard limit on the ability of international aid to reduce poverty. But it is important to remain optimistic about the effort to reduce poverty in fragile states. International aid efforts often have unfortunate, unintended consequences; however, they are often preferable to more austere alternatives.

And although no solution to combating poverty in fragile states is perfect, new solutions may soon emerge. With such dire consequences of leaving poverty in fragile nations unaddressed, there is no reason not to try.


Dr. Rui Maria de Araújo is the prime minister of Timor-Leste, a position he has held since 2015. A physician by training, he served as both the Minister of Health from 2001-2006 and the Deputy Prime Minister from 2006-2007. 


What is something most people do not know or are surprised to learn about Timor-Leste?


People are surprised to learn that we became independent and went through [the] process of nation-building and state-building. We are a vibrant democracy. People think: “Oh okay, we thought you were on and off in conflict.” That’s something that when you have conversations with people, people feel a bit surprised that we have come so far.


What lessons and skills from being a doctor do you bring to your role as prime minister?


I’m a medical doctor, but also did post-graduate studies in public health, focusing on health policy, management, and financing. I think one important thing that I bring in from my profession as a medical doctor is that you make decisions on the basis of evidence. When you face a patient, you go through all the evidence, make a diagnosis, and then start the treatment. Policy-making is more or less the same. Of course, it’s not as simple when it comes to public policy, but the principle of using evidence to assess policy options that are available and then [making] a final decision on which course we should be taking, to me, [they] are the same.

How do you plan to address systemic poverty in Timor-Leste, along with related problems such as malnourishment and low life expectancy?


We have a Strategic Development Plan guiding the overall development of our country. When we restored independence [in 2002], we had [a] National Development Plan, but five years down the line, we reviewed it, then [created the] 20-year Strategic Development Plan [to be in place] from 2011-2030. It has four main components: focusing on human capital, basic infrastructure development, institutions, and enabling economic development. Within that framework, our focus is to diversify the economy of our country, so that young people get more jobs, get more opportunities to be educated, enter the market, and become more active in our economic development.


Now, so far, most of the economic development in the country is driven by public expenditure. Private investment is still very low. From 2009 up until now, we’ve – in terms of public spending – spent up to $7 billion. It’s a small country of 1.2 million. We spent that much on our basic infrastructure, social programs, health, education, agriculture, and so on. The latest figure shows that there [has been] some good progress. Life expectancy has increased. Infant mortality rates have gone down. Poverty has been reduced, despite the fact that it is still high. But progress is seen in the pace of reduction [of the poverty rate]. More and more people are getting into schools. More and more people are getting jobs – despite very limited jobs since the private sector hasn’t come in in full force yet. The next five to ten years [will] focus on economic force, particularly in the areas of tourism, agriculture, fisheries, and basic manufacturing, in order for us to diversify our economy and get more job opportunities.

What is Timor-Leste currently doing to improve conditions for refugees and immigrants coming into the country, and what lesson can other countries learn from Timor-Leste’s handling of its historic refugee crises?


Well I think I’ll start by saying that in 1999, we had the experience of managing internally displaced people. Some of our people migrated in 2006, and, to solve the problem of internally displaced people, the government took control of the process, while the UN agencies were complementing [the government’s efforts]. So that experience also led us to lead a group called g7+, which is active in many conflict countries.  In the context of advocating for better coordination amongst the agencies and countries…I think the principle is that it should be country-led, meaning if it is a problem of the Central African Republic, the authorities there should be the ones leading the process and all the agencies [should] support that process.




Pirates — those ancient swashbucklers or their contemporary illegal-downloading counterparts — seldom conjure images of political savvy or engagement. Iceland, however, is a different case. The island nation in the north Atlantic has seen its political landscape altered by self-styled “pirates.” This transformation hasn’t erupted from smash-and-grab stunts or insurrection; these “pirates,” rather, are members of an upstart political party, aptly named the Pirate Party, which recently tripled its membership in the Icelandic Parliament. By achieving mainstream political success, the Pirates have distinguished themselves from other movements with distinct, populist roots. And unlike their forebears, the Pirates will be able to directly bring grassroots concerns to a national legislative body. This ability marks the Pirates as a unique exception to the landscape of contemporary populist movements. Though the Bernie Sanders campaign — and the Occupy movement before it — captured and channeled powerful anti-establishment frustrations, such movements were unable to secure broader support or success. Those movements may have altered the liberal landscape, but ultimately their bark lacked a truly meaningful bite. The Pirate Party’s success, however, is both bark and bite; it represents a dissolution of distance between the electorate and the political elite. But it is not without unique challenges, namely: Can the party stay true to its ideals while inhabiting the very stations of power and influence that it set out to reimagine?

The answer to this question lies, at least in part, in the biographical details of the Pirate Party. For starters, the Icelandic Pirate Party, formed in the fall of 2012 by Birgitta Jónsdóttir, a former Wikileaks activist, is actually an offshoot of an identically named party that started in Sweden in 2006. The Swedish Pirates initially focused on combating European copyright laws, which Jónsdóttir has called “draconian.” They rejected the laws as inflexible and arbitrarily inconsistent, even within the European Union, and claimed they did little to protect “the rights of the public.” A little less than a decade later, it seems as though the Party has been successful: in 2015, a representative from the German branch of the Pirates was selected to lead a revision process for European copyright laws.

This seminal episode, despite its modesty, speaks to the ideological stances that continue to define the core of the Pirate Party’s platform. In the words of Jónsdóttir, that platform is singularly focused on advocating for “civilian’s rights.” The seeming nebulousness of this term is not an accident. Rather, it reflects the increasing relevance of discourse on the interactions between technology, government, and personal privacy. Importantly, this notion of civilian’s rights has served as more than lip service for the party. The translation of a philosophical stance into an actionable reality, best exemplified by the 2015 revision, lends credibility to the Party’s position. In doing so, it places the Pirates’ ultimate aim of “moderni[zing] how we make laws” within the realm of political possibility. Further, it demonstrates, in part, that the Party is capable of both inhabiting legislative power and reforming it.

The Pirates have also offered a rather concrete vision of what this “modernization” process might entail. In simplest terms, it would be a return to direct democracy. This measure, they hope, would do more than solely redirect political power to Iceland’s citizens. If properly implemented, it would increase governmental transparency while encouraging private engagement with, or interest in, political affairs. Further unique to the Pirate Party is its dearth of hard and fast policy positions. On questions like Iceland’s potential admission to the EU, for example, the party would put its money where its mouth is and let the Icelandic people decide via referendum.

These measures are far from a panacea, however. One need not look further than the United Kingdom to see the potential pitfalls of leaving issues of national importance in the hands of the electorate. Jónsdóttir acknowledges the potential for misinformation to taint the democratic process and would call for an “informed campaign” to adequately spell out the pros and cons of a given referendum. Yet, it is not clear what an “informed campaign” might entail. Ultimately, this exemplifies larger concerns regarding how the Pirate Party would implement its vision for Iceland or how it would potentially govern without explicit policy positions.

These concerns have done little to detract from the party’s popular appeal. Indeed, the party’s recent gains in the legislature suggest that an attitude of suspicion towards the traditional political elite outweighs vague concerns regarding implementation or practicality. In that regard, it is not without cause that Iceland has been home to such popular support for the Pirates. The past decade has been littered with episodes that have likely undermined popular faith in both government and big business. In 2008, the country was rocked by a financial crisis after three major banks — together ten times the size of the national economy — collapsed. Iceland did stage a remarkable recovery, with GDP surpassing pre-collapse levels in 2014, but frustrations still lingered as the Parliament neglected to ratify a new constitution that garnered 67 percent approval in a national referendum. And in April of this year, Prime Minister Sigmundur David Gunnlaugsson resigned after documents that were part of the Panama Papers release indicated he might have harbored a conflict of interest.

Ultimately, the electoral success of the Pirate Party reflects Iceland’s shifting sociopolitical climate. However, the disillusionment with career politicians, traditional political parties, and ineffectual rule that catalyzed this change will likely shape election outcomes in continental Europe and further afield.

That being said, Iceland’s Pirate Party may still be an exception, rather than a rule. The island nation is home to a particular blend of technological savvy, political openness and optimism, and a history of ineffectual governance that provides opportunities for upstart political movements. These factors likely contribute to the broader success of the Pirate Party, especially when compared to American counterparts like the Sanders campaign. But, despite this optimal environment, the Pirates still only hold about a sixth of the Icelandic Parliament. The burden of representing populist ideals when trying to bridge political divides is, seemingly, a challenge on either side of the Atlantic. Time will tell if the idealism and radical stances that define the party will continue to flourish within the confines of the legislature. Regardless of that outcome, the rise of the Pirate Party is a powerful example of how popular politics can land in the national arena.


While the proposition to legalize marijuana has taken up the most oxygen among Massachusetts’ ballot initiatives, another issue is just as consequential: charter schools. Question 2, if approved, would expand existing charter schools and/or authorize the creation of up to twelve additional ones. Earlier this year, the measure seemed poised to pass, with one poll in March finding nearly 75 percent support. But over the past few months, several prominent political figures in Massachusetts – including Senator Elizabeth Warren and Boston Mayor Marty Walsh – have come out in opposition to Question 2, raising concerns about how it would reallocate money intended for other public schools. Voters have been receptive: a recent WBUR poll found only 41 percent of respondents now support the measure, with 52 percent opposed.

Although issues of funding allocation do merit public consideration, the problems with charter schools run deeper than just who gets what money; citizens should also be concerned with how charter schools spend the money they do receive. As publicly-funded schools, they should be held to the same educational standards as public schools. However, charter schools across the United States often fail to pull their own weight, especially in the realm of special education, as they take in a smaller proportion of special needs students than public schools. This practice is unfair to the students, parents, and taxpayers of school districts everywhere: special needs students should retain the ability to attend charter schools as they please – just like everybody else – and public schools shouldn’t have to disproportionately support special education programs while simultaneously losing funding to charter schools.

Though privately run, charter schools are still considered “public” because their funding comes from the education budgets of the cities and towns they serve. Parents may choose to send their child to a charter school as an alternative to their district’s standard public school, and funding is allocated on a per-student basis. Specific details vary by state, but typically charter schools receive the same amount per student as would have been spent on that student in their regular public school district. Some charter schools also receive private donations and funding, which are not always publicly disclosed, as charter schools are not always forthcoming with making their budgets public.

Technically, charter schools are “open to all children” who desire to attend and must take in all who apply, but if the number of students applying exceeds the capacity of the school, most charter schools claim to employ a random lottery system. However, this is where the process gets murky. A Washington Post fact-checking report and analysis of charter schools claimed that there is “no empirical evidence” to support the National Alliance of Public Charter Schools’ claim that charter schools are “generally required to take all students who want to attend.” Furthermore, the Post’s piece illustrates how some charter schools use admission tests and other push-out techniques to avoid taking in low-performing students.

These exclusionary practices conflict with the public aspect of charter schools. Unlike public schools, where all students are provided with access and the potential to succeed, charter schools often determine which students will have the opportunity to fulfill their potential and, more importantly, which will not. While these schools, intentionally or not, have excluded low-performing students from participating in the charter school experiment – a deplorable but understandable practice – a more sinister exclusionary operation exists with respect to special needs students. Many charter schools take in a disproportionately low number of students in need of special education. In Massachusetts, charter schools have consistently taught a lower proportion of special needs students (usually around 75-80 percent of the total) than state public schools in each of the past eleven years; in Los Angeles from 2013-2014, the percentage of students with severe disabilities at public schools was more than three times higher than that at local charter schools. If charter schools wish to retain their public funding, which would otherwise go to schools that cannot and do not discriminate against special needs students, they should be held to the same expectations as those public schools.

In order to combat this systematic discrimination, however, charter schools must first become more transparent about their finances and operations. Without taking care of the underlying issue of concealed business practices, any legislative action can be worked around surreptitiously; for example, even if states implement a mandatory minimum percentage of special needs students for charter schools, without oversight they can continue to selectively accept only the highest-functioning of the special needs students who apply.
Moreover, if charter schools are going to receive public (and private) funding, they should have to disclose their finances so that school districts can ensure that proper funding is being directed towards special education.

A recent episode of Last Week Tonight explained that many charter schools are overseen by amorphous education management organizations (EMOs) – private companies specializing in education. Unlike public entities (like the schools themselves), these privately-held companies can solicit private funding and donations and are not legally bound to release their finances, a provision that ultimately conceals the exact spending practices of many charter schools and belies their public attributes. Moreover, most charter schools are reluctant or even non-responsive to requests for information about their contracts with their EMOs; in its fact-check, the Washington Post sent Freedom of Information Act requests to more than 400 charter schools, and only 20 percent responded with the requested contract information.

Even if charter schools are eventually forced to become more financially transparent, that’s only a first step. State governments also need to reform the way these schools are allowed to operate. (It’s up to state governments because education varies too much between states for federal action to be reasonable or enforceable, and relying on local governments would be tricky given that many charter schools serve multiple municipalities.) Most importantly, there needs to be more independent oversight regarding which students get to attend charter schools; otherwise, the schools would still have the ability to exclude special needs students. Making the lottery process more transparent or perhaps even having the school district run it (rather than the charter school itself) would significantly reduce the potential for discriminatory selection practices. More drastically, state legislatures could offer charter schools an ultimatum: either match the proportion of special needs students in their student bodies to that of other local public schools, or face significant funding cuts. None of these measures seems forthcoming; despite advocacy groups and baseline statistics indicating a lower rate of disabled students at charter schools, few politicians have even taken notice. In some states, like Pennsylvania, government officials have been outspoken about how charter schools drain public funding, but the discrimination angle remains largely untouched by legislators.

The same can be said of Massachusetts, where many of those opposed to Question 2 have emphasized the detrimental effects such expansion would have on public schools, but no one has yet taken steps toward anti-discrimination measures. Whether or not Massachusetts voters approve the expansion of their state’s charter schools, the charter school system clearly could use some re-examination. Even if most charter schools don’t deliberately discriminate against students requiring special education, merely having the capacity to do so is problematic enough. Considering that charter school funding could be directed toward improving special education at schools that can’t self-select their students, it’s crucial to make sure that money is being spent fairly and responsibly. It is up to the lawmakers of Massachusetts and other states to recognize these issues and right the wrongs charter schools continue to commit.


Lobbying is loosely defined by each state as “an attempt to influence government action,” and in the eyes of many, the industry is highly untrustworthy. In a 2011 Gallup poll, 71 percent of Americans said they believed lobbyists have too much power and influence within our government. But regardless of prevailing sentiments toward lobbyists, their work unfortunately remains completely legal; they are hired by private companies, which are entitled to spend their money at their own discretion.

While private companies’ use of lobbyists is widely accepted, government agencies tread into an ethical and political gray area when they hire private lobbyists with taxpayer dollars. And yet, it happens all the time: the government, especially boards and agencies at the state and municipal level across the nation, hires private lobbying firms to advocate on their behalf to other parts of the government. This practice of tax-funded entities using public funds to hire private lobbying services is unethical, as the taxpayers who subsidize these shady efforts have no say in which policies their money is used to lobby for. Additionally, this continued usage creates and exacerbates communication struggles within government.

Given the potential pitfalls of this policy, it seems surprising that only ten states have statutes that prohibit the use of public money by government entities for lobbying services. That leaves 40 states and their various boards, districts, and agencies to spend taxpayer money as they see fit. In many of those states, lobbying constitutes no small portion of the budget: Data from the office of the California Secretary of State indicated that local government entities spent $110,153,550 on lobbying services from 2013 to 2015. In Texas, the state’s Ethics Commission found that $29 million was spent by publicly-funded entities on lobbying the state legislature. These are neither isolated incidents nor exceptions to the rule; the Show-Me Institute, a Missouri-based think tank, estimated that about $2.7 million of taxpayer money was used for lobbying by the state of Missouri in 2012. Prior to the executive order that ended publicly funded lobbying in Arizona, an estimated $1 million was being spent on this practice annually. There exists no discernible pattern as to which policies are most often lobbied for with this money, nor which agencies or departments do so the most; this practice spans governments – from education boards to water districts.

Allowing the continuation of publicly funded lobbying can have adverse effects within the government as well. It can inadvertently lead to an arms race of lobbying between different parts of the government and between different cities and municipalities and can exacerbate inequality of resources and power. If taxpayer-funded lobbying is viewed as a valid tool for success, then public agencies and commissions have incentives to continue spending more and more often in order to lobby for their personal interests instead of for those in the best interest of the people. If one municipality or board competes with another, the resulting spending spree – at the taxpayer’s expense – could be vast, as these organizations hire additional private lobbyists to vie for the resources.

If the practice of using taxpayer-funded lobbying is ever found empirically ineffective, then the case that it is wasting taxpayer money grows even stronger. If ever proven effective, it still poses a set of issues and threats, especially to poorer governments and entities that have fewer resources at their disposal. How are these lesser-funded or smaller municipalities and agencies meant to compete in an arena in which the success of policies and initiatives is tied to the amount of taxpayer money that can be spent on lobbying? Accepting a culture of publicly funded lobbying would certainly hurt and reduce the efficacy of governmental entities that don’t have substantial tax revenue from which to draw.

Finally, whereas private corporations might discontinue unsuccessful lobbying efforts in order to save their own money, public entities are not bound by a similar financial constraint. Tax revenue will accumulate regardless of lobbying outcomes, so the funding for inter-governmental lobbying is seemingly infinite. With little to no oversight of their operating procedures, many government-independent boards and commissions can pursue this practice with virtual impunity.

When all is said and done, this practice is a gross misuse of public funds that betrays taxpayers and the institutions and ideals on which this country was founded. Lobbying has already invaded government and policy-making through the private sector, as unfortunately is its prerogative, but it has no place interfering with public affairs and public funds. Several states have paved the way by enacting policy that regulates and ends this wasteful practice, but there is still a long way to go. Many states, such as Texas, have tried passing similar laws in an effort to protect taxpayer dollars and advocate for the proper use of such funds, but they have not succeeded. Unsurprisingly, these efforts are met with strong opposition from lobbyists, who work hard to block legislative reforms that might reduce their own business opportunities within the government. The battle of lobbyists and special interests versus reform will continue next year, as lawmakers in Texas and other states have already proposed new legislation to put an end to this practice.

The misuse and lack of oversight of public funds have no place in the public works of our government. The time has come to put an end to governmental entities using public money and taxpayer dollars in order to advance their own agendas and interests. As long as this lobbying is allowed to continue, it will be done at the expense of the taxpayer and average citizen, whose money is being misused and whose voice is being stifled as private lobbyists line their pockets to advance special interests.


The first round of elections for the Majlis, Iran’s national parliament, changed the dynamic for political reformists throughout the country. Eighty-three candidates from The List of Hope — an informal coalition of moderate and reformist candidates — claimed victory in the first round of elections, an increase of over 50 representatives for the coalition. One particular characteristic of some candidates quickly garnered domestic and international attention: 14 female politicians secured seats in the Majlis, and seven more will compete in the second round of elections — or runoff vote — in April. Immediate reactions to this outcome, particularly in Western media, sparked headlines that framed the results either as a dramatic victory for women and the List of Hope in general, or as an insignificant event within the larger landscape of the semi-autocratic Iranian government.

Yet both sides of the political coverage missed the true significance of the election. The outcome represented a more than 50 percent increase in female representation, and it also reminded Iranians of the continued structural limitations of their own government. While the increase in female Members of Parliament (MPs) does not reflect a prioritization of female representation by the Iranian population, these women weren’t elected by accident. The victories of Iran’s women, moderates, and reformists in the recent elections need to be analyzed together to fully understand their implications. The elections do not paint a definitively positive or negative picture of gender relations in government. What they do establish, however, are conditions for substantial reform in the coming years — and this time, reforms may actively involve women in government.

There’s been a historical discontinuity between the rights of women in Iran and their participation in government. While there has been a steady increase in women’s access to education and employment, these advances have not translated into positions in political office. Despite President Hassan Rouhani’s reformist agenda and public rhetoric encouraging more women to sit in government positions, he did not appoint a single woman as a cabinet minister upon his election. The few women who were in parliament faced the constant challenge of being regarded as “ornaments” as opposed to serious politicians and often voted against their own interests.

While the election results have significantly increased the number of female members of parliament, these gains have already faced pushback from conservatives, many of whom are women. Notably, Fatemeh Alia, a conservative MP, lost reelection after she supported a law to ban women from viewing a volleyball match live, saying that it was a woman’s place to “stay at home.” Occurrences like this are not new: The few women who have made it into the Majlis in the past have mainly avoided or even worked against progress in the field of women’s rights. During President Rouhani’s administration, female MPs have had trouble initiating reform, suggesting that representation does not automatically spark change, or even a desire for it. Thus, the significance of the 14 female representatives’ victories, for women and for Iran as a whole, needs to be accompanied by opportunities for women to play meaningful roles in politics. As more women are elected and reformist parties gain political traction, that opportunity may have arrived. The public has voted in female MPs across the country not only because they have voiced feminist policies, but also because they have campaigned on substantive and convincing reform measures.

The institutional prejudices against women in Iranian politics have been in place for many years, and can only be changed with cultural and demographic changes, both of which have been brought about by reformists.

Seyedeh Fatemeh Hosseini, a PhD candidate at the University of Tehran and the youngest addition to the Majlis, exemplifies a female politician whose policy focus extends beyond gender issues. As a member of the List of Hope, she campaigned with substantive views on global economic integration and an increased attention to the needs of the next generation of Iranians, of which she counts herself a member. Her classification as part of the youth vote has propelled her political career and helped earn her considerable support. Hosseini’s victory suggests that the increase in support for female politicians may be driven not solely by changed attitudes toward women, but also by a cultural and demographic change that coincides with the reformist movement.

Reformist and moderate MPs predominantly based their campaigns on the grand strategic plan known as Vision 2025. Both ends of Iran’s political spectrum hope to establish the nation as a regional power, but the reformists’ goal prioritizes a knowledge-based society, increasingly involved in international political, economic, and cultural forums. Vision 2025 is grounded in foreign investment in Iran’s people; supporters of the plan hope to establish regional dominance by educating, equipping, and motivating the population to compete at an international level. President Rouhani is already encouraging a corresponding increase in global investment, especially in information and communications technology. Given that policy, it’s no surprise that female candidates supporting a platform focused on education, jobs, and future economic growth attracted the youth vote.

Those goals have led Iranians to not only set up the necessary conditions for reform, but to do so within a system that is still checked by remaining autocratic authorities, namely the Assembly of Experts and Supreme Leader Ayatollah Khamenei. Beyond the policies and rhetoric of these figures, women have faced even greater restraints on their participation in government from the Guardian Council, the authoritative religious body of six jurists and six theologians that is often considered the single most influential body in the government. Embodying some of the greatest challenges to Iranian democracy, the Council has repeatedly disqualified women from running based on its interpretations of Islamic law. The public’s ability to navigate the Council this February and place reformists and women in parliament indicates that, contrary to some popular belief, Iranian elections present a genuine opportunity for reform.

The runoff elections on April 29, 2016 will finalize the composition of the Majlis and have key impacts on how those reforms develop into substantial policies. Seven more female candidates may win office in districts across the country that failed to elect an MP with over 25 percent of the vote during the first round of elections. By nature, the runoff elections exhibit the diversity of Iranian political views. They particularly highlight the continued influence of conservative and hardline members of parliament, as well as the obstacles presented by both the public and the government. Yet the runoff elections also show the strength of Iran’s political process: despite the opposing political and economic visions that divide these candidates, Iran’s limited democracy is becoming increasingly fair.

Going beyond its own borders, women’s roles in reforms may be crucial in shaping international responses to reformist policies. The List of Hope’s political, economic, and cultural initiatives are fundamentally tied to the international community. Iran’s inclusion in global economic and diplomatic forums depends on the willingness of the international community just as much as it depends on political will at home. As the List of Hope attempts to modernize Iran’s role in the world, it must close the gap between how the Iranian people envision their future and how international media often portrays the nation’s goals. Through this political and cultural shift, Iran may hope to demonstrate how a large, Islamic democracy in the Middle East can serve as a model for others in the region, similar to Turkey and Mauritius’ hopes for their former female heads of government.

It’s because of this political climate that the role of women in Iran’s government may hold the key to shaping the new reformist vision. Women and reform are tied beyond the proposed policies of the List of Hope; Iran’s female MPs may operate as a lens through which the international community views the state. If this political trend continues, and the newly elected officials assume a substantive role in reforms, Iran’s international image will be drastically closer to its desired identity. As in any democracy, reforms still need to be made, and the List of Hope appears poised to give it their best shot. While this isn’t the first time Iran has been on the brink of significant change, the presence and potential leadership of Iran’s women suggests that this time might be different. Only time will tell if Iran’s political system and policy dynamics will shift in favor of women.

Infographic by Quinn Schoen

For the last three decades, anti-debt polemics have been the cause célèbre of self-styled “fiscal conservatives.” Over the course of their crusade, deficit hawks have cultivated several strategies to reduce government borrowing and spending. They have tried everything from emotional appeals that invoke scary, big-sounding factoids to more serious econometric studies. Yet all of these approaches fall short upon closer examination. Contrary to the doomsayers, the national debt is not a national emergency—at least, there’s no reason to see it as such in the near or even intermediate future. In fact, austerity is probably the most ‘fiscally irresponsible’ move for the US at present.

It’s quite clear that cutting the national debt has become an article of faith for conservatives. Consider the words of Stephen Moore, the former chief economist at the Heritage Foundation: “In 2015 the US government ran up one of the largest budget deficits in history — borrowing more than $1 billion a day seven days a week and twice on Sunday.” With this folksy statistic as his only evidence, Moore proceeds to advocate for a decades-long regimen of budget-cutting. In his view, the goal of fiscal policy should be to aggressively reduce the national debt over the next two and a half decades until the “debt burden [is] down to … a safe zone.”

Here the intelligent reader should pause and demand elaboration. Why is $8 billion per week too much? It sounds big to the layman’s ears, but government spending always “sounds big.” Furthermore, even if this $8 billion figure does need to be trimmed, how do we go about identifying and justifying a “safe zone” for national borrowing? At no point is either of these claims fully explored as a yardstick for America’s carrying capacity for debt, and yet they are some of the most commonplace fiscal fallacies. The former — framing the debt in terrifying but irrelevant terms — comes in several forms: towering visuals of stacked dollar bills, evocative memes, and analogies that put the national debt in personal terms. These tactics are certainly riveting, but only because they provoke anxiety rather than sober-minded analysis. Unfortunately, such misleading devices are the most frequent and widely believed ways of talking about the national debt.

Although Republicans are the primary culprits behind intimidating debt-related messaging, Democrats are guilty too. In a 1993 address before a joint session of Congress, Bill Clinton warned that “if our national debt were stacked in thousand-dollar bills, the stack would… reach 267 miles.” The total effect of all these vivid devices and depictions is to create wide-eyed, panicked voters who will back spending cuts. Ideally, people would instead question the macroeconomic relevance of “big” versus “bigger” stacks of dollars before they cast their ballots. After all, the average person’s impression of what’s large cannot answer the econometric questions surrounding debt sustainability. But given the state of modern political discourse vis-à-vis US government borrowing, it’s clear that many pundits have a vested interest in debasing the conversation with pathos.

Not all talk of America’s national balance sheet entails this sort of rhetorical flourish, however. There is also a second way of talking about the national debt — using the debt-to-GDP ratio — which has at least the veneer of respectability. The intuitive logic behind this metric is that GDP is a country’s income, and therefore, represents its ability to pay off debt. When debt grows faster than GDP, liabilities begin to outstrip income and the country gets closer to the brink of insolvency.

Of course, it’s not as simple as just stating a ratio: Facts without explanations are meaningless. Moore commits this oversight when he flatly asserts that America’s debt-to-GDP ratio is too high, but fails to elaborate on several crucial points. First, what determines the magic number for debt-to-GDP (Moore baselessly claims 50 percent)? Why exactly are current levels flirting with calamity, if they are at all? Economists struggle with the answer, partially because it is far from clear that there is such a universal “critical point.” An oft-cited 2010 paper on the topic by Carmen Reinhart and Kenneth Rogoff finds that gross debt starts to threaten growth when it reaches 90 percent of GDP. While alarmists have seized upon this figure, subsequent research has cast doubt on any hard-and-fast rule for government spending. Economists at the University of Massachusetts Amherst “replicate[d] Reinhart and Rogoff … and [found] that coding errors, selective exclusion of available data, and unconventional weighting of summary statistics lead to serious errors that inaccurately represent the relationship between public debt and GDP growth among 20 advanced economies in the post-war period.” Multiple other groups of researchers have joined the salvo, and no consensus exists as to when public debt actually begins to cripple economic activity. It is therefore difficult to aim for a safe zone that academic economists cannot identify and which might not be necessary.

Recent events have further belied rather than reaffirmed truisms about debt-to-GDP metrics. The countries that fell into fiscal distress in the Eurozone crisis held gross debt that ranged from 40 percent to 110 percent of GDP before panic set in. Such a wide range does not neatly lend itself to meaningful lessons on proper debt levels. Additionally, some middle-of-the-pack countries like Germany remained oases of stability as others became sources of contagion — despite the fact that Germany’s debt-to-GDP ratio was 75 percent in 2009, 10 points higher than that of troubled Spain and 20 higher than that of distressed Ireland. Hence it’s not clear that austerity is a good or even necessary economic decision when debt-to-GDP ratios are seemingly high. That’s because austerity can cause recessions, and avoiding an uptick in debt by incurring economic harm is often a bad deal. Japan demonstrates that austerity need not be the go-to decision at even higher debt levels; the country’s gross national debt is 243 percent of GDP, but with low interest rates, this burden is still manageable. No one is fretting about a surprise Japanese default, and government spending helps to mitigate Japan’s ongoing economic weakness. In terms of creditworthiness, US government bonds are still incredibly reliable, maintaining an AA+ credit rating — even with current gross debt at around 100 percent of GDP.

While austerity might not be an urgent prescription for the US, there is a case to be made that current borrowing will force an eventual reckoning. The theory holds that government expenditures are unsustainable and will cause problems decades down the road. Corresponding cuts are needed at present in order to fix or forestall this eventuality. The Congressional Budget Office raised these concerns in a 2015 report on long-run fiscal trends, contending that starting around 2020, “debt [will] be on an upward path relative to the size of the economy. Consequently, the policy changes needed to reduce debt to any given amount would become larger and larger over time. The rising debt [cannot] be sustained indefinitely; the government’s creditors [will] eventually begin to doubt its ability to cut spending or raise revenues by enough to pay its debt obligations, forcing the government to pay much higher interest rates to borrow money.” This argument therefore warns that the government’s rising financial burden from debt will eventually outpace the growth of the nation’s economy.

The problem with this analysis is that predicting the macroeconomy — and the government revenue it provides — decades in the future is the social science equivalent of reading tea leaves. Nobel Prize-winning economist Paul Krugman has called such estimates an “especially boring genre of science fiction” due to their high variability, and Jared Bernstein, formerly a top economic advisor in the Obama administration, writes that economic predictions fail beyond a 10-year horizon. If year-on-year growth ends up just a fraction of a percent higher than expected, the debt would be a nonissue. Alternatively, if the world is hit with another large recession, fiscal crises could erupt. While no one knows what will happen years from now (who predicted the Great Recession?), we can be sure that austerity today will harm employment and the economy at large. Does it really make sense to act on an uncertain and likely flawed prediction of fiscal health, especially when such action will cause almost certain economic stagnation in the present?

Perhaps the largest problem with using the debt-to-GDP ratio to justify spending cuts is that it’s an incomplete snapshot. While the ratio does invoke national income and indebtedness, it neglects a crucial variable: interest. The extent to which a country can pay creditors without falling into arrears is highly sensitive to interest rates. When rates are extremely low, borrowing is cheap, since interest is what the government pays for the privilege of borrowing. Therefore, a drop in rates — say, due to the Fed’s response to an economic crisis — has the practical effect of mitigating the government’s financial liabilities. The US is still feeling the aftershocks of such a crisis, and US monetary policy over the last seven years has been constructed with this in mind. Hence government borrowing has never been cheaper.
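To see why rates matter as much as the size of the debt itself, consider a back-of-the-envelope calculation (the figures here are hypothetical, chosen only to illustrate the mechanism). The government’s annual interest bill is roughly the interest rate times the outstanding debt:

\[
\text{interest bill} \approx i \times D.
\]

If the rate falls from $i = 5\%$ to $i = 2.5\%$, the debt stock $D$ can double while the annual payment stays flat, since $0.05 \times D = 0.025 \times 2D$. This is why a debt-to-GDP ratio quoted with no reference to $i$ cannot, by itself, tell us whether a country’s obligations are affordable.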

The graph to the right, made using Federal Reserve Economic Data from the Federal Reserve Bank of St. Louis, demonstrates as much. It depicts net interest payments as a percentage of total federal revenue over time. The current figure is in the ballpark of 7 percent, which is lower than it’s been in the last four decades. Notice how the graph remained steady during the recession. That’s because, although tax revenue decreased due to the economic contraction, the Fed reduced interest rates enough to compensate. While many fiscal conservatives claimed that the Great Recession meant that the federal government needed to cut back, the ironic truth is that the aftermath of the Great Recession has helped expand the US government’s short-term ability to sustain debt. Although interest rates are starting to inch up, this leeway still very much exists: Rates are far from normalized, the deficit is 70 percent lower than its 2009 recession peak, and continuing economic weakness makes government borrowing a worthwhile tool.

Given the aforementioned evidence, running a budget surplus and chipping away at the national debt does not seem immediately necessary. In fact, it might even be self-defeating; budget cutting could actually exacerbate the debt situation. Krugman gave an excellent exposition of this very idea in the New York Times, using Greece to describe the negative consequences of cutting spending without help from monetary policy. He argues that because austerity hurts the economy and thereby reduces tax revenue, it simultaneously saves and costs money. Accordingly, rapidly moving from a deficit to a balanced budget shrinks GDP without immediately decreasing the debt. As Krugman points out, this means that the debt-to-GDP ratio initially goes up in an economy weakened by austerity, because GDP drops while the debt remains the same. This is exactly what’s happening in Greece, where attempts to raise the surplus by one percentage point could cause a five-point rise in the debt-to-GDP ratio. The fiscal situation worsens even more after accounting for the deflationary effects of austerity. When cutbacks hurt economic activity, price levels begin to decline. A trend of decreasing prices causes people to delay purchases, since their dollars are worth more tomorrow than today in real terms. Less consumption and more savings exacerbate already anemic demand. The result, as Krugman states, is a smaller economy with the same debt — a categorical loss.
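Krugman’s Greek arithmetic can be sketched in a few lines. The numbers below are hypothetical (a debt ratio near Greek levels and an assumed fiscal multiplier of 1.5), meant only to show the mechanism: the debt stock is unchanged in the short run while GDP shrinks, so the ratio rises.

```python
# Stylized austerity arithmetic: a spending cut shrinks GDP via the
# fiscal multiplier, while the debt stock is initially unchanged, so the
# debt-to-GDP ratio can rise. All figures are hypothetical.

def debt_ratio_after_austerity(debt, gdp, cut, multiplier):
    """Return (ratio_before, ratio_after) as percentages of GDP.

    cut        -- size of the spending cut, in the same units as GDP
    multiplier -- fiscal multiplier; GDP falls by multiplier * cut
    """
    before = 100 * debt / gdp
    after = 100 * debt / (gdp - multiplier * cut)
    return before, after

# Debt at 175% of GDP, a cut worth 1% of GDP, a multiplier of 1.5 --
# plausible when monetary policy cannot offset the contraction.
before, after = debt_ratio_after_austerity(debt=350, gdp=200, cut=2, multiplier=1.5)
print(f"{before:.1f}% -> {after:.1f}%")  # the ratio rises despite the cut
```

Adding the tax revenue lost to the shrinking economy, and the deflation Krugman describes, would widen the gap further; this sketch captures only the first-round effect.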

The only exception to this pattern is if monetary policy can lower interest rates and mitigate the economic costs of tax increases and/or spending cuts. But interest rates are stuck near zero and cannot go significantly lower — negative rates would take from savers and investors, causing them to withdraw their money from the financial system. So the Fed is no help, and proposed budget cuts would cripple aggregate demand in its already weak state; America would only be backpedaling. This is exactly what we see in the data: Greece’s debt-to-GDP ratio has failed to fall despite several rounds of harsh austerity. Instead, budget slashing simply caused its real income to crater, dropping by 25 percent in just a few years. It seems that fiscal conservatives have formulated a plan of one step forward, two steps back.

There is a level on which austerity appears sensible for the United States. However, it is a level fraught with intuition, misdirection, and misunderstandings. Rationales for rolling back spending come with the sheen of responsibility. They seduce both laypeople and economists. They feed hyperpartisanship and help politicians posture. While these approaches are persuasive, their logic is flawed. It is public policy malpractice to see dangers where there are none — doing so raises the risk that the United States flees toward even greater fiscal and economic hazards. So the next time some politician, polemicist, or Average Joe rails against the big spenders in Washington, do not be seduced. Anti-spending crusaders are touting a solution in search of a problem.

On September 8, 2014 — three days before the 41st anniversary of the violent coup that overthrew President Salvador Allende and brought General Augusto Pinochet to power — Santiago, Chile was again rocked by violence. That morning, the Chilean Supreme Court ruled against an appeal from a group of Marxist-Leninist radicals serving time for the 2007 murder of a police officer during a botched bank robbery. In the lead-up to the ruling, radical groups throughout the country had warned that the appeal’s denial would result in retributive attacks.

At roughly 2 p.m., a fire extinguisher filled with gunpowder exploded in an underground shopping center at the Escuela Militar metro station in the affluent neighborhood of Las Condes. The bomb injured 14 civilians and unleashed panic throughout the city. Ten days later, the Conspiracy of the Cells of Fire (CCF), an underground anarchist terrorist group, published a short manifesto claiming responsibility for the Escuela Militar metro attack.

Though the attack was tragic, it was not the first of its kind. The Escuela Militar bombing was one of at least 30 terrorist attacks in Santiago that year alone, and since 2004, over 200 bombings have rattled the city. In some ways, these incidents are relics of the 17 years of oppressive military rule under Pinochet following the 1973 coup. Under Pinochet’s regime of forced disappearances and political oppression, the country’s civil society structures — through which marginalized groups could voice their dissent and participate in governance — broke down.

The country’s legacy of political violence continues today, in large part due to Chile’s mishandling of violent threats and archaic antiterrorism laws. But the Chilean people are demanding drastic change, both through broad shifts in existing policy and, perhaps, the re-imagining of the country’s constitution. The Escuela Militar bombings and other attacks of its kind are a reminder of Chile’s checkered past, and if the nation is to fully move past this troubled history, the government must repeal and replace its antiterrorism laws and treat peaceful protest as a critical channel for political engagement.

By most measures, Chile has seen a great deal of success since its transition to democracy in 1990. The country boasts the highest GDP per capita in South America, a figure that has more than quadrupled in the last quarter-century. The nation’s political institutions also rate as some of the strongest in the region. In 2009, a paper from the Inter-American Development Bank characterized Chile as a country that has successfully embraced democratic institutions and dismissed protests as “sporadic and…[far less relevant] to the policymaking process in general.” Moreover, the Economist recently named Santiago the safest city in Latin America. Nevertheless, the last two Chilean presidents have failed to create the kind of programmatic reforms that citizens have demanded. The civil unrest and anarchist attacks that have plagued Santiago for the past 11 years challenge the popular perception of Chile as a thriving modern democracy and rising economic power.

The Chilean success story looks much less rosy just beneath the surface. The country is ranked one of the most unequal in the world, and 14.4 percent of its population lives in poverty. Support for Chile’s political establishment is also declining: A 2015 study by researchers from the Pontificia Universidad Católica argues that in Chile there is “a growing distance between political parties and the society, in parallel with an increased criticism of electoral processes and representative institutions.”

A proliferation of protest movements and incidents of civil unrest in recent years reflects a growing sense of alienation. Today, 71 percent of Chileans support drafting a new constitution, reflecting the growing hunger for change. In addition to the wave of anarchist attacks that have rocked Santiago since 2004, Chile has also grappled with a sometimes violent indigenous rights movement as well as widespread student demonstrations, beginning with the so-called “Chilean Winter” of 2011. These three movements, in the words of Brown University Professor Arnulf Becker Lorca, “are all connected by general discontent” with the Chilean government’s failure to adequately represent the will of its people.

Perhaps no group has felt that discontent longer than the Mapuche. With over 1.5 million members, the Mapuche are Chile’s largest ethnic minority and have long been politically marginalized. Beginning in 1852, the Chilean government systematically and unilaterally imposed its sovereignty over the Mapuche, who would go on to face a century and a half of political, economic, and social dispossession. Since the country’s return to democracy in 1990, Mapuche activists have sought more political autonomy at the local level. But the Chilean government has often seen the Mapuche’s indigenous rights movement as being at odds with the push for a developed, modern economy.

Beginning in the 1990s, Mapuche activists began protesting large development projects, such as the construction of hydroelectric dams, on land that is culturally significant to their people. The government has mostly ignored the demands of indigenous groups and has even gone so far as to arrest antidevelopment activists and protestors. According to Mapuche activist José Naín Curamil, more than 250 Mapuche have been detained by the government, including Naín himself. In addition to arresting activists, the Chilean government has also continued to plan and develop new hydroelectric projects, an agenda that one indigenous advocacy group calls “a true slap on the face of human rights and [the] interest of the region’s inhabitants.”

Compared to the Mapuche rights movement, the Chilean Winter is a much younger and more popular political protest. Since 2011, student protestors have taken to the streets in cities across the country to demand policy changes. The Chilean Winter first aimed to address high university tuition rates, which represent 2 percent of Chile’s GDP — the second-highest rate in the world — and other failures of the Chilean education system, such as the nation’s privatized schools and underperforming teachers. These efforts have since spurred mass protests against everything from metro fares to laws banning abortions.

In contrast to the anarchist bombings or the indigenous rights movement, the Chilean Winter protests have had a more significant impact. The BBC stated in a 2014 report that, aside from the Escuela Militar metro attack, anarchist bombings were a “nuisance for Chileans rather than a serious threat to public safety.” The student protests, however, have been impossible to ignore. One protest during former President Sebastián Piñera’s administration brought 150,000 students, professors, and other demonstrators to the streets to demand education reforms.

As a result of Piñera’s reluctance to embrace the reforms demanded by protestors, his party was swept from power in 2013. Voters replaced Piñera with current President Michelle Bachelet, a progressive icon of the Chilean left. Bachelet had previously served as president from 2006 to 2010, and her return to power was aided by promising many of the reforms demanded by protestors on issues like women’s rights and education. While Bachelet initially succeeded in passing a major school reform, progress has slowed, as the president’s attention has been refocused on other issues, such as corruption and a major recession.

However, frustration hasn’t always been channeled into peaceful protest. While all three cases of protest began with the same grievances with the current state of political affairs, the violent Chilean anarchist campaign differs in key ways from the Mapuche rights movement and the Chilean Winter. Mapuche and university student protests have at times turned violent, but these movements have sought to distance themselves from the degree of violence used by anarchist terrorists in the Escuela Militar metro attack.

Historically, Chilean anarchists have tried to avoid civilian casualties. Bombs have typically been detonated in the dead of night, when bystanders are less likely to be caught in the ensuing blasts. The targeting of the underground shopping center last year represented an enormous break with precedent. Anarchist bombers have historically targeted banks, government buildings, and churches — structures that represent institutional systems opposed by anarchism. The symbolism of the Escuela Militar metro attack is murkier. The CCF, in claiming responsibility for the attack, denounced the metro’s shopping center as a symbol of “bourgeoisie commercialism.” Beyond this general commitment to terror tactics and the shared philosophy of anarchism, it’s unclear what holds the violent anarchist movement together. Unlike the Mapuche rights movement or the Chilean Winter, Chile’s anarchist terrorists in the CCF have not yet articulated programmatic policy objectives or an agenda. Instead, they push for broad and diffuse goals, such as an end to consumerism or political oppression. This organizational weakness has made it possible for other political actors to tie the violent anarchist attacks to almost any political or social agenda that suits their need. Following the Escuela Militar metro attack, a handful of left-wing politicians immediately speculated that the incident was a false flag operation on the part of right-wingers to discredit the Chilean left. On the other side of the political spectrum, the right-wing intendant of the Bío Bío region claimed in 2011 that the attacks and other forms of social unrest could be traced directly to mothers having children out of wedlock, saying: “Chile is a country without a family.”

While these explanations may work well as political talking points, they fail to adequately explain the reasons behind this 11-year-long string of bombings. Anarchists aren’t attacking banks, churches, and government buildings because of a right-wing conspiracy or a weakening of family values. Although they differ in their choice to use violent methods, anarchists are bombing the infrastructure of a political and economic system that they, like Mapuche and Chilean Winter activists, believe has failed the Chilean people.

Events like the Escuela Militar bombing will continue until the Chilean government can effectively prosecute bombers and put them behind bars. Thanks to half-measures and the state’s reluctance to address its legacy of political violence, terrorists have been allowed to walk away from attacks without facing serious consequences for their actions.

The case of Mónica Caballero and Francisco Solar is a good example. Between 2006 and 2010, Caballero and Solar took part in an extended campaign of anarchist terrorist attacks known as the “Casos Bombas,” which would ultimately include 30 bombings of churches, government agencies, banks, and other targets. They were arrested in the summer of 2010 following an especially high profile attack just blocks from then-President Sebastián Piñera’s house.

The government was initially thought to have a strong case against the attackers and chose to pursue charges of terrorist conspiracy against Caballero and Solar. However, the prosecution’s sloppy handling of the trial ultimately led to the case being dismissed two years later. The prosecutors and judicial officials responsible for the bungling of the trial faced fierce criticism at the time, with former Interior Minister Andrés Chadwick stating, “I believe that some of our courts of justice owe an explanation.” The state’s failures in the Casos Bombas would come back to haunt it just two years later when Caballero and Solar were connected to a terrorist attack, this time on the Basílica Pilar de Zaragoza in northwestern Spain.

The reluctance of the Chilean judiciary to try bombers as terrorists stems from Chile’s complicated history of terrorism. Until 2004, Chile had not experienced a major terrorist attack since the waning days of the Pinochet regime. The law used to prosecute terrorists today is the same antiterror law used by Pinochet to illegally detain political prisoners in the 1970s and the 1980s. During his 17 years in power, Pinochet oversaw the forced disappearances of over 3,000 Chileans and used this antiterrorism law to imprison 40,000 political dissenters. Not only is the law outdated and ineffective for addressing today’s terrorist threats, it is also widely unpopular in Chile due to its questionable history and extreme provisions. The law allows police to keep suspects in solitary confinement indefinitely without leveling charges against them and allows for the use of wire-tapping and secret witnesses in investigations. The Chilean public overwhelmingly opposes the law, making it very difficult to put accused bombers, like Caballero and Solar, on trial for terrorism charges.

In the ten years leading up to the Escuela Militar metro attack, the Chilean government jailed only one individual on terrorism charges. Others, such as Caballero and Solar in the Casos Bombas, had been brought to trial but ultimately had their charges dismissed or their cases thrown out. One accused bomber, Luciano Pitronello, was brought to trial in 2012 after he attempted to blow up a bank in Santiago. The explosive detonated in his hands, forcing him to seek medical attention and foiling his plan. After being brought to trial on terrorism charges, Pitronello was ultimately sentenced to six years of probation and had all charges carrying a prison term dropped, despite the fact that all of his actions had been caught on camera.

The Chilean government’s inability to prosecute anarchist terrorists under the Pinochet-era antiterror law raises the important question of why the law hasn’t been successfully changed or replaced with newer, less controversial legislation. In fact, ever since Chile’s return to democracy, there have been efforts to do precisely this. Piñera instituted some reforms to the law in 2010, but activists argued that these changes did little to alter its overall nature. In both her 2005 and 2013 campaigns, Bachelet opted to distance herself from the law altogether, saying that Chile “does not need [an] antiterrorism law” and that, if elected, she would rely on other statutes to prosecute terrorist crimes.

Both Piñera and Bachelet kept the law on the books, however, and used it — albeit largely unsuccessfully — against accused terrorists. The Piñera administration used the law in its prosecutions of Caballero, Solar, and Pitronello. Ultimately, the administration failed in its efforts because of fierce opposition from the Chilean judiciary system — not because of any unwillingness to charge suspected criminals using the controversial law. Bachelet broke her campaign promise not to use the law during both of her terms, invoking it against Mapuche activists as well as against the Escuela Militar metro bombers.

In the immediate aftermath of the Escuela Militar metro attack, the Chilean government failed to provide the full-throated denunciation of the attacks that many had hoped for. President Bachelet at first claimed that, even after the bombing, “it can not be said that there is terrorism in Chile,” despite the fact that the bombing was a clear-cut example of an organization inciting terror by attacking civilians and infrastructure. After a cabinet meeting later that day, however, a Bachelet administration spokesman decried the attack as “an act of terrorism” and announced that the government would work to bring the attackers to justice using the controversial antiterrorism law.

Sure enough, the Chilean government followed through, arresting three suspects — Natalie Casanova Muñoz, Juan Flores Riquelme, and Guillermo Durán Méndez — within two weeks of the attack. Following the arrests, the government kept up its tough-on-crime rhetoric and appeared to be close to the verdict that it needed to regain order and silence its critics. But as time has gone on, the Chilean government seems to have fallen back into its old habits. There has been no new effort to prosecute terrorists, and no verdict has been handed down for Casanova, Flores Riquelme, and Durán. Fourteen months after their arrest, the three are still being detained without trial under the antiterrorism law.

In September 2014, student protest leader turned politician, Francisco Figueroa, summarized the root causes of civil unrest in Chile, saying, “it isn’t just a problem of Sebastián Piñera’s government, this is actually an institutional problem.” The continued use of the deeply unpopular antiterrorism law by both Piñera’s right-wing government and Bachelet’s left-wing government speaks to this institutional problem itself. Despite a population clamoring for reform, Chile’s leaders have continued to use the law. For many Chileans, every instance of its usage evokes the repression of the Pinochet era and thus undermines the country’s democratic progress.

The Escuela Militar metro attack presents a crossroads for Chile. In the face of a devastating terrorist attack born of a general sense of dissatisfaction with Chile’s supposed success as a developing democracy, the Chilean government must answer a clarion call to reform the way that the country tackles political violence and responds to political protest. If the Bachelet government fails to answer this call to action, it risks losing the confidence and support of the Chilean people. Perhaps nowhere is this clearer than with the University of Chile Student Federation (FECH), the leading organization in the Chilean Winter student protests. In 2013, the FECH elected Melissa Sepúlveda to the organization’s presidency. Sepúlveda, an avowed anarchist, represents a radical turn for an organization at the heart of Chile’s political future. In a radio interview shortly after the 2013 elections, Sepúlveda made clear that the FECH expected little reform from the government, declaring, “the possibility for change is not in Congress.”

Nonetheless, there is some reason for optimism. Bachelet recently announced a major reform that could give even a critic as hardened as Sepúlveda a reason to hope for change. In October 2015, Bachelet declared that her government would begin the process of replacing Chile’s constitution, a document that dates from the Pinochet dictatorship. In doing so, Bachelet is creating an opportunity for Chileans who feel excluded from the political process to weave their voices and their concerns into the very fabric of how the government operates. It offers the Chilean government an opportunity to rewrite its outmoded antiterrorism law and replace it with something that better represents the model of democratic progress Chile strives to be.

In Libya, the Islamic State yet again rears its head. Though the opening of this new frontier does mark a threatening expansion of ISIL, it is altogether more indicative of the weak Libyan state into which the group has spread. ISIL has easily established itself in the fractured state, but its presence should not be treated in the same manner as in Syria and Iraq, with bombs and air raids. If ISIL is to be driven out of Libya, the Libyans themselves, as well as the international community, must instead focus on addressing the fractured nature of the country and work towards the establishment of a unitary and representative government.

The Islamic State announced its arrival in Libya with the release of a video in which the group barbarously beheaded 21 Egyptian Coptic Christians. These killings prompted the Egyptian government to promise to “avenge the bloodshed and to seek retribution.” Soon after, Egyptian air strikes on ISIL strongholds were launched, killing an estimated 40 to 50 individuals in ISIL-controlled areas. This response achieved little toward pushing the Islamic State out of Libya or addressing Libya’s weak state capacity.

On March 5, as Tripoli recovered from the most recent bombings, UN-sponsored talks were launched in Rabat in an attempt to reconcile the two rival Libyan governments, the Council of Deputies in Tobruk and the New General National Congress. Both governments claim legitimacy as Libya’s central governmental power and have hitherto refused to negotiate. Nonetheless, according to Bernardino León, the Head of the United Nations Support Mission in Libya (UNSMIL), “There is a sense of, if it’s not optimism, at least a sense that it is possible to make a deal, and that is something very important because in the last months, this was not the case.”

The resumption of talks may signal a renewed effort at diplomacy and at addressing the structural issues at hand. And, despite Egyptian President El-Sisi’s calls to launch an international military intervention force, diplomacy and assistance ultimately represent the best means by which the rest of the world can help Libya.

Post-Revolution Libya

In 2011, the Arab Spring saw the rise of mass protests throughout Libya against the dictatorship of Muammar Qaddafi, calling for his ouster and a more representative government. With the help of a NATO bombing campaign, the interim government established after the fall of Qaddafi, the National Transitional Council (NTC), declared Libya “liberated” in October 2011 and announced plans to hold democratic elections. However, since the overthrow of Qaddafi, Libya has yet to find its footing.

In March 2014, the General National Congress, which took over from the initial transitional government, voted to establish the Council of Deputies (also known as the House of Representatives) to replace itself, all amidst rising public discontent with the government. Elections were held soon after. Both the low turnout rate and the high numbers of elected Federalists and Nationalists, in comparison to the 30 percent of seats gained by Islamist groups, gave rise to renewed protests and violence, particularly among Islamist political groups and militias.

In May 2014, General Haftar of the Libyan National Army, affiliated with the Council of Deputies, launched Operation “Libyan Dignity,” a series of attacks against the terrorist group Ansar al-Sharia in Benghazi. Ansar al-Sharia is considered a powerful Islamist militia faction and is thought to be responsible for the murder of the US Ambassador to Libya, J. Christopher Stevens, in that city in 2012. The “Libyan Dignity” campaign ignited a civil war, causing Islamist political groups and militias to unite as a movement called “Libya Dawn” against an increasingly alienated government in Tobruk. Libya Dawn launched an offensive in the western region of the country, eventually capturing Tripoli in August 2014 and establishing a new, autonomous government.

Today, Libya remains divided between the two powerhouses: the New General National Congress in Tripoli and the internationally recognized Council of Deputies government in Tobruk. Both governments have their own infrastructure, including separate central banks and parliaments, and each controls about 10 percent of Libyan land. The rest of Libyan territory falls into the hands of small religious and tribal-based militias, many of which are loosely affiliated with one of the two governments.

International Intervention

The 2011 revolution and armed conflict that removed Qaddafi from power were capped by a NATO-led air assault on the leader’s power centers in Libya. These air raids proved highly effective and contributed significantly to the overthrow of the Qaddafi dictatorship. Nonetheless, the intervention was criticized for overstepping its mandate as stipulated in UN Resolution 1973 and the “Responsibility to Protect” doctrine. What is perhaps more damning are the conditions in which NATO forces left Libya. The involvement of interventionist forces fundamentally altered the dynamics of power in Libya, both between Qaddafi and the opposition and within the opposition itself, as a result of the favoring of specific rebel groups. Such changes in governance and leadership were not responsibly monitored. At the end of the NATO intervention, Libya was left to reorganize on its own.

The only main international force still in Libya today is UNSMIL, which established operations in March 2014. It has hitherto had little success in uniting the competing Libyan factions to form a unified government. However, the most recent round of UN talks in Rabat are promising in that both rival governments are willing to partake in dialogue, a feat thought unthinkable only a month ago.

The 2011 intervention should not be repeated. Instead, the international community must support UNSMIL in its effort to reconcile the two rival governments. Moreover, the covert roles of other states, such as the UAE and Saudi Arabia, in funding militias and launching bombing campaigns, particularly against the Libya Dawn government in Tripoli, must end. In their place, an international response like the one taking shape in the UN talks in Rabat must continue to materialize. To reinforce the legitimacy of these talks, other institutions such as the European Union, the African Union (highly critical of the 2011 NATO intervention), and the Arab League must also play a role.

ISIL should not be fought in Libya as it has been in Syria and Iraq. Instead, the Libyan government itself must address the presence of the Islamic State, with the support of the international community. The recent airstrikes launched by Egypt should not serve as the international community’s response, and El-Sisi’s calls for military intervention should not be heeded. ISIL has festered in the cracks of Libya’s highly fragmented state. Instead of further widening such divides through military intervention, these fractures must be addressed quickly.

While a recent Gallup poll showed that 50 percent of the public prioritizes environmental issues over economic growth, only 24 percent identify the environment as a critical policy area. This indicates that although the public highly values the environment, it doesn’t approach the issue in a political sense so much as a cultural one. Much of today’s widespread environmentalism seems increasingly passive and employed on the individual or communal level — perhaps due to repeated setbacks on the political stage. After all, the intense debate over the Keystone XL pipeline, the struggles within the EPA to develop better pollution rules, and the repeated failures to pass carbon taxes and cap-and-trade programs may have tempered the enthusiasm and resolve of the environmental movement. These unsuccessful maneuvers indicate the American public’s waning belief in the government’s ability to be proactive in the environmental realm. What’s needed now is an intervention.

Lately that intervention has manifested in a new and especially powerful tool for environmentalists: geoengineering, or the deliberate alteration of environmental processes. Perhaps “new” is a misnomer. While geoengineering has experienced a resurgence — it’s currently being used in California to combat drought — the strategy has actually been implemented for decades. Cloud seeding, the most common geoengineering technique, is the attempt to influence the amount or type of precipitation through the atmospheric dispersion of substances. It originated in the 1830s, when James “Storm King” Espy suggested that the US government burn down forests, believing that the smoke would stimulate rain. A century later, showmen in the West launched rockets containing catalysts into the clouds to induce artificial precipitation. The practice reached its peak in the Depression-era Dust Bowl, and while it would sometimes produce rain, it didn’t stave off drought.

Though cloud seeding was born from economic desperation, it came to unexpected prominence as a military technique. During the Cold War, the US military became increasingly interested in the wartime opportunities that geoengineering provided. That research came to fruition during the Vietnam War; in order to limit the movement of North Vietnamese forces, the military dropped silver iodide flares — thought to cause rainfall — over enemy territory. The project, dubbed “Operation Popeye,” was meant to slow the efforts of the Vietnamese army to move men and supplies during the dry season. Instead, the effect of the cloud seeding fell on civilians and likely caused the catastrophic floods and typhoons in North Vietnam that devastated much of the country’s harvest in 1971.

These unintended consequences of geoengineering demonstrate two key principles. First, weather modification can work, but it is dangerous: Co-opting the environment — whether through dams or silver iodide dispersion systems — is always risky. Second, militarized weather modification constitutes a total war strategy, since its effects fall on military and civilian populations alike. This reveals something unique about weather modification: It can seem benign when wrapped in environmentalist packaging, but even brief military experimentation exposes the ominous depth of geoengineering’s effects.

In light of this troubling reality, the United Nations created the 1977 Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, a declaration with 48 signatories — including the United States — that bans weaponized or hostile use of environmental modification. Regardless, the US military continued to pursue domination of weather modification techniques well after the convention; in the late 1990s, the US Air Force Academy produced a paper entitled, “Weather as a Force Multiplier: Owning the Weather in 2025.” The report, often referred to as Air Force 2025, details a series of futuristic systems that the military could develop in order to control weather patterns as a strategic asset. The Cold War may have been the beginning, but Air Force 2025 ensured that interest in weather modification did not end with the fall of the Berlin Wall.

Cyclic storms on demand or Zeus-like thunderbolts dropped from drones may not be realistic, but it’s easy to see why the military had experiments to that effect in mind. As Air Force 2025 points out, “A tropical storm has an energy equal to 10,000 one-megaton bombs.” The bomb dropped on Hiroshima released only 0.016 megatons of energy. However, no storms-on-demand will be cycling through anytime soon, as the report’s team largely failed to spark a weather modification revolution, and the technology needed remains far in the future. But some of the paper’s goals, at least for hyper-accurate weather monitoring, are nearing completion, and the success of the military in this field indicates a sustained strategic interest in the environment as an asset. The radar communications technology credited with a major role in the 1991 Gulf War, for example, has its roots in weather radar research. And it’s the military’s environmental monitoring technologies that produce the nighttime satellite imagery critical to US efforts to aid recuperation from natural disasters.

But weather monitoring isn’t without its skeptics, who view it as a conspiratorial cover-up or a next step towards governmental weather control. In March, approximately 17,000 activists in Australia turned out to protest the country’s current government. Among them was an odd constituency with an imaginative message: America was controlling Australia’s weather. The High Frequency Active Auroral Research Program (HAARP), funded by the US Defense Advanced Research Projects Agency (DARPA), is a project that government officials have repeatedly said is designed for weather monitoring, but many have suspected it of being a tool for weather control. HAARP is also reportedly part of US radio communications and surveillance projects. Because of its secretive nature, it has been blamed for everything from disabled satellites and mind control to Gulf War Syndrome and the destruction of the space shuttle Columbia.

This kind of paranoia about weather manipulation demonstrates the pervasiveness of today’s skepticism towards government involvement in environmental issues. While there is much to admire about environmental noninterventionism’s focus on local and private solutions, the conspiracy theorists at the movement’s fringes have poisoned attitudes towards geoengineering. DARPA probably isn’t using HAARP for mind control, but with the stigma of conspiracy theory attached to it and other weather monitoring and control programs, the public exerts little pressure on politicians to seriously consider even the mildest of geoengineering solutions. In short, the suspicion and distrust of a few have become a constant barrier to useful research and even-handed experiments in the environmental field.

As fantastical as some geoengineering plans sound, many come from a place of scientific rigor and could provide major environmental benefits if taken seriously. Take Nobel Laureate Dr. Paul Crutzen’s idea to shoot sulfate aerosols, which have a demonstrated atmospheric cooling effect, into the stratosphere in order to block sunlight and combat climate change. Although Crutzen’s editorial surfaced in 2006, the fundamental idea for weather modification as a climate change combatant is old news. In 1965 President Lyndon Johnson received a report titled, “Restoring the Quality of Our Environment,” which suggested research into the possibility of initiating climatic effects to counterbalance the atmospheric concentration of CO2. Yet it has only been in the past decade that these possibilities have been more fully explored. Since then, a larger body of academic work has demonstrated the long-term feasibility of Crutzen’s plans. The key phrase here is long-term feasibility; there is no clear consensus among scientists — or politicians — on the viability of current geoengineering technologies, or on which proposals may be the best and most cost-effective. Crutzen’s work is no exception, especially given the difficulty of aerosol delivery and distribution.

Geoengineering also invites dissent, and gauging consent among those affected will be difficult. Efforts like space reflection mirrors or stratospheric aerosol release can’t be localized, raising questions about whether any one country has the right to use these technologies, as well as questions about who would bear their cost. Smaller projects that would circumvent the ownership debate have problems of their own. When California’s cloud seeding recently came to light, controversy followed, despite the fact that the state had just experienced one of the driest years in its history. Artificially induced rainfall effectively “steals” rain from surrounding areas and, if normalized, would redistribute rainfall between locations. As such, a complex debate about water rights surrounds the practice.

Conspiracy theories aside, there are scientifically rigorous critiques, which portray geoengineering as just another Band-Aid — albeit a big one. Modification is simply an adaptation strategy for dealing with environmental concerns, and while it would likely be effective in mitigating the impact of, say, climate change, it doesn’t tackle the source. As Dr. James Lovelock, a prominent environmental scholar, observed, “Consider what might happen if we start by using a stratospheric aerosol to ameliorate global heating; even if it succeeds, it would not be long before we face the additional problem of ocean acidification. This would need another medicine, and so on. We could find ourselves in a Kafka-like world from which there is no escape.”

This is not a trifling problem for the practice of geoengineering. Then again, neither is climate change a problem to be trifled with. It demands attention, and ultimately pressure, from a public that is content with leaving environmental activism in the realm of ecotourism and organic yogurt bars. After all, climate change is already Kafka-esque; it’s estimated to create 50 million environmental refugees by 2020. Consideration of geoengineering is especially prudent given that the ideal solution to global warming — one that would tackle the source of the problem — has so far proved impossible to find. It is time to approach this global issue both creatively and persuasively. Weather modification, though more salve than cure, may prove to be just what the doctor ordered.


Art by Olivia Watson

The Sochi Games have closed, and I am sure many citizens of Russia are wondering: was it worth it? The Olympics are expensive and disruptive to the host country. However, they are exceedingly popular with voters and bring a great sense of pride. They can also have huge symbolic power. The 1992 Barcelona games, for example, symbolized how Spain went from a dictatorship to an Olympics-hosting democracy in just 17 years.

So, many debate whether or not the Olympics are worth it.

One piece of this debate played out last year in Atlanta. The Braves are leaving Turner Field, the stadium from the 1996 Olympics, for the suburbs. The Braves leaving the city limits represents a blow to downtown Atlanta. Rembert Browne, Atlanta native and ESPN writer, wrote a beautiful piece about this on ESPN’s Grantland website. Do go read it. However, the Braves moving does give one a sense of the working life of a stadium in America these days: 17 years. Turner Field was built brand-new for the 1996 Olympics, and current plans are to demolish it after the new stadium is built.

Occurrences such as this are why a publication like The Economist, an outlet as British as can be, refused to endorse the Olympics coming to London. They said to let Paris handle the hassle. They estimated the Beijing Games cost $40 billion, in addition to the disruption of daily life in Beijing. Many of the detractors of the Olympics also point to Greece, which hosted the 2004 Olympics. Most of the facilities created for that event are no longer even used. They say the event was a one-time shot for the country, not something that generated sustained tourism or international interest.

Some cities even offer a sort of counteroffer to the Olympics. New York put out a marketing campaign in 2012 around “skip the Olympics, come to New York,” trying to lure some international tourists away from the expensive Olympic crowds.

For a test of how much development happens due to a major event, we can look to Brazil. This year, Brazil will host the World Cup, followed by hosting the 2016 Olympics. If, at the end of 2016, things are looking up in Rio, proponents of the Olympics will say the games provided a major boost to the city.

We shall see, at the end of 2016, how those two major events affected Rio. There is widespread agreement that hosting many smaller events provides great economic growth. Keeping hotels full and restaurants busy truly helps a city. That is why nearly every city has a convention and visitors bureau. From what I have heard, most Londoners think the 2012 Olympics were a great success and are very proud of hosting the games. I wish I could go ask the citizens of Sochi what they think. In Rio, people are already protesting the World Cup and Olympics as wasteful spending. The debate continues.

If you would like to watch two economists debate the merits of hosting the Olympics, click here.

If you would like to read The Atlantic Cities’ Stephanie Garlock’s great analysis of the Braves stadium move, click here. She is one of The Atlantic’s Urban Wonks. #dreamjob