Dr. Rui Maria de Araújo is the prime minister of Timor-Leste, a position he has held since 2015. A physician by training, he served as Minister of Health from 2001 to 2006 and as Deputy Prime Minister from 2006 to 2007.


What is something most people do not know or are surprised to learn about Timor-Leste?


People are surprised to learn that we became independent and went through [the] process of nation-building and state-building. We are a vibrant democracy. People think: “Oh okay, we thought you were on and off in conflict.” That’s something that when you have conversations with people, people feel a bit surprised that we have come so far.


What lessons and skills from being a doctor do you bring to your role as prime minister?


I’m a medical doctor, but also did post-graduate studies in public health, focusing on health policy, management, and financing. I think one important thing that I bring in from my profession as a medical doctor is that you make decisions on the basis of evidence. When you face a patient, you go through all the evidence, make a diagnosis, and then start the treatment. Policy-making is more or less the same. Of course, it’s not as simple when it comes to public policy, but the principle of using evidence to assess policy options that are available and then [making] a final decision on which course we should be taking, to me, [they] are the same.

How do you plan to address systemic poverty in Timor-Leste, along with related problems such as malnourishment and low life expectancy?


We have a Strategic Development Plan guiding the overall development of our country. When we restored independence [in 2002], we had [a] National Development Plan, but five years down the line, we reviewed it, then [created the] 20-year Strategic Development Plan [to be in place] from 2011-2030. It has four main components: focusing on human capital, basic infrastructure development, institutions, and enabling economic development. Within that framework, our focus is to diversify the economy of our country, so that young people get more jobs, get more opportunities to be educated, enter the market, and become more active in our economic development.


Now, so far, most of the economic development in the country is driven by public expenditure. Private investment is still very low. From 2009 up until now, we’ve – in terms of public spending – spent up to $7 billion. It’s a small country of 1.2 million. We spent that much on our basic infrastructure, social programs, health, education, agriculture, and so on. The latest figure shows that there [has been] some good progress. Life expectancy has increased. Infant mortality rates have gone down. Poverty has been reduced, despite the fact that it is still high. But progress is seen in the pace of reduction [of the poverty rate]. More and more people are getting into schools. More and more people are getting jobs – despite very limited jobs since the private sector hasn’t come in in full force yet. The next five to ten years [will] focus on economic force, particularly in the areas of tourism, agriculture, fisheries, and basic manufacturing, in order for us to diversify our economy and get more job opportunities.


What is Timor-Leste currently doing to improve conditions for refugees and immigrants coming into the country, and what lesson can other countries learn from Timor-Leste’s handling of its historic refugee crises?


Well, I think I’ll start by saying that in 1999, we had the experience of managing internally displaced people. Some of our people migrated in 2006, and, to solve the problem of internally displaced people, the government took control of the process, while the UN agencies were complementing [the government’s efforts]. So that experience also led us to lead a group called g7+, which is active in many conflict countries. In the context of advocating for better coordination amongst the agencies and countries…I think the principle is that it should be country-led, meaning if it is a problem of the Central African Republic, the authorities there should be the ones leading the process and all the agencies [should] support that process.




Pirates — those ancient swashbucklers or their contemporary illegal-downloading counterparts — seldom conjure images of political savvy or engagement. Iceland, however, is a different case. The island nation in the north Atlantic has seen its political landscape altered by self-styled “pirates.” This transformation hasn’t erupted from smash-and-grab stunts or insurrection; these “pirates,” rather, are members of an upstart political party, aptly named the Pirate Party, which recently tripled its representation in the Icelandic Parliament. By achieving mainstream political success, the Pirates have distinguished themselves from other movements with similarly populist roots. And unlike their forebears, the Pirates can bring grassroots concerns directly to a national legislative body. This ability marks the Pirates as a unique exception within the landscape of contemporary populist movements. Though the Bernie Sanders campaign — and the Occupy movement before it — captured and channeled powerful anti-establishment frustrations, neither was able to secure broader support or success. Those movements may have altered the liberal landscape, but ultimately their bark lacked a truly meaningful bite. The Pirate Party’s success, however, is both bark and bite; it represents a dissolution of distance between the electorate and the political elite. But it is not without unique challenges, namely: Can the party stay true to its ideals while inhabiting the very stations of power and influence that it set out to reimagine?

The answer to this question lies, at least in part, in the biographical details of the Pirate Party. For starters, the Icelandic Pirate Party, formed in the fall of 2012 by Birgitta Jónsdóttir, a former Wikileaks activist, is actually an offshoot of an identically named party that started in Sweden in 2006. The Swedish Pirates initially focused on combating European copyright laws, which Jónsdóttir has called “draconian.” They rejected the laws as inflexible, arbitrarily inconsistent even within the European Union, and of little use in protecting “the rights of the public.” A little less than a decade later, the Party appears to have been successful: in 2015, a representative from the German branch of the Pirates was selected to lead a revision process for European copyright laws.

This seminal episode, despite its modest scale, speaks to the ideological stances that continue to define the core of the Pirate Party’s platform. In the words of Jónsdóttir, their platform is singularly focused on advocating for “civilian’s rights.” The seeming nebulousness of this term is no accident. Rather, it reflects the increasing relevance of discourse on the interactions between technology, government, and personal privacy. Importantly, this notion of civilian’s rights has served as more than lip service for the party. The translation of a philosophical stance into an actionable reality, best exemplified by the 2015 revision, bolsters the credibility of the Party’s stance. In doing so, it places the Pirates’ ultimate aim of “moderni[zing] how we make laws” within the realm of political possibility. Further, it demonstrates that the Party is capable of both inhabiting legislative power and reforming it.

The Pirate Party’s success is both bark and bite; it represents a dissolution of distance between the electorate and the political elite.

The Pirates have also offered a rather concrete vision of what this “modernization” process might entail. In simplest terms, it would be a return to direct democracy. This measure, they hope, would do more than redirect political power to Iceland’s citizens. If properly implemented, it would increase governmental transparency while encouraging private engagement with, or interest in, political affairs. Also distinctive is the Pirate Party’s lack of hard-and-fast policy positions. On questions like Iceland’s potential admission to the EU, for example, the party would put its money where its mouth is and let the Icelandic people decide via referendum.

These measures are far from a panacea, however. One need not look further than the United Kingdom to see the potential pitfalls of leaving issues of national importance in the hands of the electorate. Jónsdóttir acknowledges the potential for misinformation to taint the democratic process and would call for an “informed campaign” to adequately spell out the pros and cons of a given referendum. Yet, it is not clear what an “informed campaign” might entail. Ultimately, this exemplifies larger concerns regarding how the Pirate Party would implement its vision for Iceland or how it would potentially govern without explicit policy positions.

These concerns have done little to detract from the party’s popular appeal. Indeed, the party’s recent gains in the legislature suggest that an attitude of suspicion towards the traditional political elite outweighs vague concerns regarding implementation or practicality. In that regard, it is not without cause that Iceland has been home to such popular support for the Pirates. The past decade has been littered with episodes that have likely undermined popular faith in both government and big business. In 2008, the country was rocked by a financial crisis, after three major banks — together ten times the size of the national economy — collapsed. Iceland did stage a remarkable recovery, with GDP surpassing pre-collapse levels in 2014, but frustrations still lingered as the Parliament neglected to ratify a new constitution that garnered 67 percent approval in a national referendum. And in April of this year, Prime Minister Sigmundur David Gunnlaugsson resigned after documents that were part of the Panama Papers release indicated he might have harbored a conflict of interest.

Ultimately, the electoral success of the Pirate Party reflects Iceland’s shifting sociopolitical climate. Yet the disillusionment with career politicians, traditional political parties, and ineffectual rule that catalyzed this change is not unique to Iceland, and will likely shape election outcomes in continental Europe and farther afield.

That being said, Iceland’s Pirate Party may still be an exception, rather than a rule. The island nation is home to a particular blend of technological savvy, political openness and optimism, and a history of ineffectual governance that provides opportunities for upstart political movements. These factors likely contribute to the broader success of the Pirate Party, especially when compared to American counterparts like the Sanders campaign. But, despite this optimal environment, the Pirates still only hold about a sixth of the Icelandic Parliament. The burden of representing populist ideals when trying to bridge political divides is, seemingly, a challenge on either side of the Atlantic. Time will tell if the idealism and radical stances that define the party will continue to flourish within the confines of the legislature. Regardless of that outcome, the rise of the Pirate Party is a powerful example of how popular politics can land in the national arena.


While the proposition to legalize marijuana has taken up the most oxygen of Massachusetts’ ballot initiatives, another issue is just as consequential: charter schools. Question 2, if approved, would expand existing charter schools and/or authorize the creation of up to twelve additional ones. Earlier this year, the measure seemed poised to pass, with one poll in March finding nearly 75 percent support. But over the past few months, several prominent political figures in Massachusetts – including Senator Elizabeth Warren and Boston Mayor Marty Walsh – have come out in opposition to Question 2, raising concerns about how it would reallocate money intended for other public schools. Voters have been receptive: a recent WBUR poll found only 41 percent of respondents now support the measure, with 52 percent opposed.

Although issues of funding allocation do merit public consideration, the problems with charter schools run deeper than just who gets what money; citizens should also be concerned with how charter schools spend the money they do receive. As publicly-funded schools, they should be held to the same educational standards as public schools. However, charter schools across the United States often fail to pull their own weight, especially in the realm of special education, as they take in a smaller proportion of special needs students than public schools. This practice is unfair to the students, parents, and taxpayers of school districts everywhere: special needs students should retain the ability to attend charter schools as they please – just like everybody else – and public schools shouldn’t have to disproportionately support special education programs while simultaneously losing funding to charter schools.

Though privately run, charter schools are still considered “public” because their funding comes from the education budgets of the cities and towns they serve. Parents may choose to send their child to a charter school as an alternative to their district’s standard public school, and funding is allocated on a per-student basis. Specific details vary by state, but typically charter schools receive the same amount per student as would have been spent on that student in their regular public school district. Some charter schools also receive private donations and funding, which are not always publicly disclosed, as charter schools are not always forthcoming about their budgets.

Technically, charter schools are “open to all children” who desire to attend and must take in all who apply, but if the number of students applying exceeds the capacity of the school, most charter schools claim to employ a random lottery system. However, this is where the process gets murky. A Washington Post fact-checking report and analysis of charter schools claimed that there is “no empirical evidence” to support the National Alliance of Public Charter Schools’ claim that charter schools are “generally required to take all students who want to attend.” Furthermore, the Post’s piece illustrates how some charter schools use admission tests and other push-out techniques to avoid taking in low-performance students.

Charter schools across the United States often fail to pull their own weight, especially in the realm of special education, as they take in a smaller proportion of special needs students than public schools.

These exclusionary practices conflict with the public aspect of charter schools. Unlike public schools, where all students are provided with access and the potential to succeed, charter schools often determine which students will have the opportunity to fulfill their potential and, more importantly, which will not. While these schools, intentionally or not, have excluded low-performing students from participating in the charter school experiment – a deplorable but understandable practice – a more sinister exclusionary operation exists with respect to special needs students. Many charter schools take in a disproportionately low number of students in need of special education. In Massachusetts, charter schools have consistently taught a lower proportion of special needs students (usually around 75-80 percent of the public school rate) than state public schools in each of the past eleven years; in Los Angeles from 2013-2014, the percentage of students with severe disabilities at public schools was more than three times higher than that at local charter schools. If charter schools wish to retain their public funding, which would otherwise go to schools that cannot and do not discriminate against special needs students, they should be held to the same expectations as those public schools. In order to combat this systematic discrimination, however, charter schools must first become more transparent about their finances and operations. Without addressing the underlying issue of concealed business practices, any legislative action can be worked around surreptitiously; for example, even if states implement a mandatory minimum percentage of special needs students for charter schools, schools without oversight can continue to selectively accept only the highest-functioning of the special needs students who apply.
Moreover, if charter schools are going to receive public (and private) funding, they should have to disclose their finances so that school districts can ensure that proper funding is being directed towards special education.

A recent episode of Last Week Tonight explained that many charter schools are overseen by amorphous education management organizations (EMOs) – private companies specializing in education. Unlike public entities (like the schools themselves), these privately-held companies can solicit private funding and donations and are not legally bound to release their finances, a provision that ultimately conceals the exact spending practices of many charter schools and belies their public attributes. Moreover, most charter schools are reluctant or even non-responsive to requests for information about their contracts with their EMOs; in its fact-check, the Washington Post sent Freedom of Information Act requests to more than 400 charter schools, and only 20 percent responded with the requested contract information.

Even if charter schools are eventually forced to become more financially transparent, that’s only a first step. State governments also need to reform the way these schools are allowed to operate. (It’s up to state governments because education varies too much between states for federal action to be reasonable or enforceable, and relying on local governments would be tricky given that many charter schools serve multiple municipalities.) Most importantly, there needs to be more independent oversight regarding which students get to attend charter schools; otherwise, the schools would still have the ability to exclude special needs students. Making the lottery process more transparent, or perhaps even having the school district run it (rather than the charter school itself), would significantly reduce the potential for discriminatory selection practices. More drastically, state legislatures could offer charter schools an ultimatum: either match the proportion of special needs students in their student bodies to that of other local public schools, or significantly cut their funding. None of these measures seems forthcoming; despite advocacy groups and baseline statistics that indicate a lower rate of disabled students at charter schools, few politicians have even taken notice. In some states like Pennsylvania, government officials have been outspoken about how charter schools drain public funding, but the discrimination angle remains largely untouched by legislators.

The same can be said of Massachusetts, where many of those opposed to Question 2 have emphasized the detrimental effects such expansion would have on public schools, but no one has yet taken steps toward anti-discrimination measures. Whether or not Massachusetts voters approve the expansion of their state’s charter schools, the charter school system clearly could use some re-examination. Even if most charter schools don’t deliberately discriminate against students requiring special education, just having the capacity to do so is problematic enough. Considering that charter school funding could instead be directed toward improving special education at schools that lack the ability to self-select their students, it’s crucial to make sure that money is being spent fairly and responsibly. It is up to the lawmakers of Massachusetts and other states to recognize these issues and right the wrongs charter schools continue to commit.


Lobbying is loosely defined by each state as “an attempt to influence government action,” and in the eyes of many, the industry is highly untrustworthy. In a 2011 Gallup poll, 71 percent of Americans said they believed lobbyists have too much power and influence within our government. But regardless of prevailing sentiments toward lobbyists, their work unfortunately remains completely legal; they are hired by private companies, which are entitled to spend their money at their own discretion.

While private companies’ use of lobbyists is widely accepted, government agencies tread into an ethical and political gray area when they hire private lobbyists with taxpayer dollars. And yet, it happens all the time: the government, especially boards and agencies at the state and municipal level across the nation, hires private lobbying firms to advocate on its behalf to other parts of the government. This practice of tax-funded entities using public funds to hire private lobbying services is neither ethical nor right, as the taxpayers who subsidize these shady efforts have no say in which policies their money is used to lobby for. Additionally, this continued usage creates and exacerbates communication struggles within government.

Given the potential pitfalls of this policy, it seems surprising that only ten states have statutes that prohibit the use of public money by government entities for lobbying services. That leaves 40 states and their various boards, districts, and agencies to spend taxpayer money as they see fit. In many of those states, lobbying constitutes no small portion of the budget: Data from the office of the California Secretary of State indicated that local government entities spent $110,153,550 on lobbying services from 2013 to 2015. In Texas, the state’s Ethics Commission found that $29 million was spent by publicly-funded entities on lobbying the state legislature. These are neither isolated incidents nor exceptions to the rule; the Show-Me Institute, a Missouri-based think tank, estimated that about $2.7 million of taxpayer money was used for lobbying by the state of Missouri in 2012. Prior to the executive order that ended publicly funded lobbying in Arizona, an estimated $1 million was being spent on this practice annually. There exists no discernible pattern as to what policies are most often lobbied for with this money, nor which agencies or departments do so the most; this practice spans governments – from education boards to water districts.

How are these lesser-funded or smaller municipalities and agencies meant to compete in an arena in which the success of policies and initiatives is tied to the amount of taxpayer money that can be spent on lobbying?

Allowing the continuation of publicly funded lobbying can have adverse effects within the government as well. It can inadvertently lead to an arms race of lobbying between different parts of the government and between different cities and municipalities, exacerbating inequalities of resources and power. If taxpayer-funded lobbying is viewed as a valid tool for success, then public agencies and commissions have an incentive to spend more, and more often, to lobby for their own interests instead of the best interests of the people. If one municipality or board competes with another, the resulting spending spree – at the taxpayer’s expense – could be vast, as these organizations hire additional private lobbyists to vie for the same resources.

If the practice of using taxpayer-funded lobbying is ever found empirically ineffective, then the case that it is wasting taxpayer money grows even stronger. If ever proven effective, it still poses a set of issues and threats, especially to poorer governments and entities that have fewer resources at their disposal. How are these lesser-funded or smaller municipalities and agencies meant to compete in an arena in which the success of policies and initiatives is tied to the amount of taxpayer money that can be spent on lobbying? Accepting a culture of publicly funded lobbying would certainly hurt and reduce the efficacy of governmental entities that don’t have substantial tax revenue from which to draw.

Finally, whereas private corporations might discontinue unsuccessful lobbying efforts in order to save their own money, public entities are bound by no similar financial constraint. Tax revenue will accumulate regardless of lobbying outcomes, so the funding for inter-governmental lobbying is seemingly infinite. And with little to no oversight of their operating procedures, many independent government boards and commissions can pursue this practice with virtual impunity.

When all is said and done, this practice is a gross misuse of public funds that betrays taxpayers and the institutions and ideals on which this country was founded. Lobbying has already invaded government and policy-making through the private sector, as unfortunately is its prerogative, but it has no place interfering with public affairs and public funds. Several states have paved the way by enacting legislation that regulates or ends this wasteful practice, but there is still a long way to go. Other states, such as Texas, have tried passing similar laws in an effort to protect taxpayer dollars and advocate for the proper use of such funds, but they have not succeeded. Unsurprisingly, these efforts are met with strong opposition from lobbyists, who work hard to block legislative reforms that might reduce their own business opportunities within the government. The battle of lobbyists and special interests versus reform will continue next year, as lawmakers in Texas and other states have already proposed new legislation to put an end to this practice.

The misuse and lack of oversight of public funds have no place in the public works of our government. The time has come to put an end to governmental entities using public money and taxpayer dollars in order to advance their own agendas and interests. As long as this lobbying is allowed to continue, it will be done at the expense of the taxpayer and average citizen, whose money is being misused and whose voice is being stifled as private lobbyists line their pockets to advance special interests.


The first round of elections for the Majlis, Iran’s national parliament, changed the dynamic for political reformists throughout the country. Eighty-three candidates from the List of Hope — an informal coalition of moderate and reformist candidates — claimed victory in the first round of elections, an increase of over 50 representatives for the coalition. One particular characteristic of some candidates quickly garnered domestic and international attention: 14 female politicians secured seats in the Majlis, and seven more will compete in the second round of elections — a runoff vote — in April. Immediate reactions to this outcome, particularly in Western media, sparked headlines that framed the results as either a dramatic victory for women and the List of Hope alike, or an insignificant event within the larger landscape of the semi-autocratic Iranian government.

Yet both sides of the political coverage missed the true significance of the election. The outcome represented a more than 50 percent increase in female representation and also reminded Iranians of the continued structural limitations of their own government. While the increase in female Members of Parliament (MPs) does not by itself reflect the Iranian population’s prioritization of female representation, these women weren’t elected by accident. The victories of Iran’s women, moderates, and reformists in the recent elections need to be analyzed together to fully understand their implications. The elections do not paint a definitively positive or negative picture of gender relations in government. What they do establish, however, are conditions for substantial reform in the coming years. And this time, reforms may actively involve women in government.

There’s been a historical discontinuity between the rights of women in Iran and their participation in government. While there has been a steady increase in women’s access to education and employment, these advances have not translated into positions in political office. Despite President Hassan Rouhani’s reformist agenda and public rhetoric encouraging more women to sit in government positions, he did not appoint a single woman as a cabinet minister upon his election. The few women who were in parliament faced the constant challenge of being regarded as “ornaments” as opposed to serious politicians and often voted against their own interests.

While the election results have significantly increased the number of female members of parliament, these gains have already faced pushback from conservatives, many of whom are women. Notably, Fatemeh Alia, a conservative MP, lost reelection after she supported a law to ban women from viewing a volleyball match live, saying that it was a woman’s place to “stay at home.” Occurrences like this are not new: the few women who have made it into the Majlis in the past have mainly avoided or even worked against progress in the field of women’s rights. During President Rouhani’s tenure, female MPs have had trouble initiating reform, suggesting that representation does not automatically spark change, or even a desire for it. Thus, the significance of the 14 female representatives’ victories, for women and for Iran as a whole, needs to be accompanied by opportunities for women to play meaningful roles in politics. As more women are elected and reformist parties gain political traction, that opportunity may have arrived. The public has elected female MPs across the country not only because they have voiced feminist policies, but also because they have campaigned on substantive and convincing reform measures.

The institutional prejudices against women in Iranian politics have been in place for many years, and can only be changed with cultural and demographic changes, both of which have been brought about by reformists.

Seyedeh Fatemeh Hosseini, a PhD candidate at the University of Tehran and the youngest addition to the Majlis, embodies this combination: a female politician whose policy focus extends beyond gender issues alone. As a member of the List of Hope, she campaigned with substantive views on global economic integration and an increased attention to the needs of the next generation of Iranians, of which she counts herself a member. Her classification as part of the youth vote has propelled her political career and helped earn her considerable support. Hosseini’s victory suggests that the increase in support for female politicians may be driven not solely by changing attitudes and weakening institutional prejudices against women, but also by a cultural and demographic shift that coincides with the reformist movement.

Reformist and moderate MPs predominantly based their campaigns on the grand strategic plan known as Vision 2025. Both ends of Iran’s political spectrum hope to establish the nation as a regional power, but the reformists’ goal prioritizes a knowledge-based society, increasingly involved in international political, economic, and cultural forums. Vision 2025 is grounded in foreign investment in Iran’s people; supporters of the plan hope to establish regional dominance by educating, equipping, and motivating the population to compete at an international level. President Rouhani is already encouraging a corresponding increase in global investment, especially in information and communications technology. Given that policy, it’s no surprise that female candidates supporting a platform focused on education, jobs, and future economic growth attracted the youth vote.

Those goals have led Iranians to not only set up the necessary conditions for reform, but to do so within a system that is still checked by remaining autocratic authorities, namely the Assembly of Experts and Supreme Leader Ayatollah Khamenei. Beyond the policies and rhetoric of these figures, women have faced even greater restraints on their participation in government from the Guardian Council, the authoritative religious body of six jurists and six theologians that is often considered the single most influential body in the government. Embodying some of the greatest challenges to Iranian democracy, the Council has repeatedly disqualified women from running based on its interpretations of Islamic law. The public’s ability to navigate the Council’s vetting this February and place reformists and women in parliament indicates that, contrary to some popular belief, Iranian elections present a genuine opportunity for reform.

The runoff elections on April 29, 2016 will finalize the composition of the Majlis and will shape how those reforms develop into substantive policies. Seven more female candidates may win office in districts across the country where no candidate won over 25 percent of the vote during the first round of elections. By its nature, runoff voting exhibits the diversity of Iranian political views. The runoffs particularly highlight the continued influence of conservative and hardline members of parliament, as well as the obstacles posed by the public and the government alike. Yet the runoff elections also show the strength of Iran’s political process: despite the opposing political and economic visions that divide these candidates, Iran’s limited democracy is becoming increasingly fair.

Beyond Iran’s own borders, women’s roles in reform may be crucial in shaping international responses to reformist policies. The List of Hope’s political, economic, and cultural initiatives are fundamentally tied to the international community. Iran’s inclusion in global economic and diplomatic forums depends on the willingness of the international community just as much as it depends on political will at home. As the List of Hope attempts to modernize Iran’s role in the world, it must close the gap between how the Iranian people envision their future and how international media often portray the nation’s goals. Through this political and cultural shift, Iran may hope to demonstrate how a large, Islamic democracy in the Middle East can serve as a model for others in the region, much as Turkey and Mauritius have pinned similar hopes on their former female heads of government.

It’s because of this political climate that the role of women in Iran’s government may hold the key to shaping the new reformist vision. Women and reform are tied beyond the proposed policies of the List of Hope; Iran’s female MPs may operate as a lens through which the international community views the state. If this political trend continues, and the newly elected officials assume a substantive role in reforms, Iran’s international image will move drastically closer to its desired identity. As in any democracy, reforms still need to be made, and the List of Hope appears poised to give it their best shot. While this isn’t the first time Iran has been on the brink of significant change, the presence and potential leadership of Iran’s women suggests that this time might be different. Only time will tell if Iran’s political system and policy dynamics will shift in favor of women.

Infographic by Quinn Schoen

For the last three decades, anti-debt polemics have been the cause célèbre of self-styled “fiscal conservatives.” Over the course of their crusade, deficit hawks have cultivated several strategies to reduce government borrowing and spending. They have tried everything from emotional appeals that invoke scary, big-sounding factoids to more serious econometric studies. Yet all of these approaches fall short upon closer examination. Contrary to the doomsayers, the national debt is not a national emergency—at least, there’s no reason to see it as such in the near or even intermediate future. In fact, austerity is probably the most ‘fiscally irresponsible’ move for the US at present.

It’s quite clear that cutting the national debt has become an article of faith for conservatives. Consider the words of Stephen Moore, the former chief economist at the Heritage Foundation: “In 2015 the US government ran up one of the largest budget deficits in history — borrowing more than $1 billion a day seven days a week and twice on Sunday.” With this folksy statistic as his only evidence, Moore proceeds to advocate for a decades-long regimen of budget-cutting. In his view, the goal of fiscal policy should be to aggressively reduce the national debt over the next two and a half decades until the “debt burden [is] down to … a safe zone.”

Here the intelligent reader should pause and demand elaboration. Why is $8 billion per week too much? It sounds big to the layman’s ears, but government spending always “sounds big.” Furthermore, even if this $8 billion figure does need to be trimmed, how do we go about identifying and justifying a “safe zone” for national borrowing? At no point is either of these claims fully explored as a yardstick for America’s carrying capacity for debt, and yet they are among the most commonplace fiscal fallacies. The former — framing the debt in terrifying but irrelevant terms — comes in several forms: towering visuals of stacked dollar bills, evocative memes, and analogies that put the national debt in personal terms. These tactics are certainly riveting, but only because they provoke anxiety rather than sober-minded analysis. Unfortunately, such misleading devices are the most frequent and widely believed ways of talking about the national debt.

Although Republicans are the primary culprits behind intimidating debt-related messaging, Democrats are guilty too. In a 1993 address before a joint session of Congress, Bill Clinton warned that “if our national debt were stacked in thousand-dollar bills, the stack would… reach 267 miles.” The total effect of all these vivid devices and depictions is to create wide-eyed, panicked voters who will back spending cuts. Ideally, people would instead question the macroeconomic relevance of “big” versus “bigger” stacks of dollars reaching into space before they cast their ballots. After all, the average person’s impression of what’s large cannot answer the econometric questions surrounding debt sustainability. But given the state of modern political discourse vis-à-vis US government borrowing, it’s clear that many pundits have a vested interest in debasing the conversation with pathos.

Not all talk of America’s national balance sheet entails this sort of rhetorical flourish, however. There is also a second way of talking about the national debt — using the debt-to-GDP ratio — which has at least the veneer of respectability. The intuitive logic behind this metric is that GDP is a country’s income, and therefore, represents its ability to pay off debt. When debt grows faster than GDP, liabilities begin to outstrip income and the country gets closer to the brink of insolvency.
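
The mechanics of that ratio are easy to sketch as a back-of-the-envelope simulation. The numbers below are purely illustrative (an 18-trillion-dollar economy and debt stock, with made-up borrowing and growth rates), not actual US figures:

```python
# Illustrative only: hypothetical starting values and rates, not real US data.
def debt_to_gdp_path(debt, gdp, deficit_rate, growth, years):
    """Track the debt-to-GDP ratio when the government borrows
    deficit_rate * GDP each year while GDP grows at rate `growth`."""
    path = []
    for _ in range(years):
        debt += deficit_rate * gdp  # this year's new borrowing
        gdp *= 1 + growth           # the economy grows (or shrinks)
        path.append(debt / gdp)
    return path

# Borrowing outpaces growth: the ratio climbs year after year.
rising = debt_to_gdp_path(debt=18e12, gdp=18e12, deficit_rate=0.04, growth=0.02, years=10)
# Growth outpaces borrowing: the ratio drifts down with no austerity at all.
falling = debt_to_gdp_path(debt=18e12, gdp=18e12, deficit_rate=0.01, growth=0.03, years=10)
print(round(rising[-1], 2), round(falling[-1], 2))
```

The second path is the crucial one for this debate: a country can run deficits every single year and still watch its debt-to-GDP ratio fall, so long as growth keeps pace with borrowing.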

Of course, it’s not as simple as just stating a ratio: Facts without explanations are meaningless. Moore commits this oversight when he flatly asserts that America’s debt-to-GDP ratio is too high, but fails to elaborate on several crucial points. First, what determines the magic number for debt-to-GDP (Moore asserts 50 percent, offering no basis)? Why exactly are current levels flirting with calamity, if they are at all? Economists struggle with the answer, partially because it is far from clear that there is such a universal “critical point.” An oft-cited 2010 paper on the topic by Carmen Reinhart and Kenneth Rogoff finds that gross debt starts to threaten growth when it reaches 90 percent of GDP. While alarmists have seized upon this figure, subsequent research has cast doubt on a hard-and-fast rule for government spending. Economists at the University of Massachusetts Amherst “replicate[d] Reinhart and Rogoff … and [found] that coding errors, selective exclusion of available data, and unconventional weighting of summary statistics lead to serious errors that inaccurately represent the relationship between public debt and GDP growth among 20 advanced economies in the post-war period.” Multiple other groups of researchers have joined the fray, and no consensus exists as to when public debt actually begins to cripple economic activity. It is therefore difficult to aim for a safe zone that academic economists cannot identify and that might not even be necessary.

Recent events have further belied rather than reaffirmed truisms about debt-to-GDP metrics. The countries that fell into fiscal distress in the Eurozone crisis held gross debt ranging from 40 percent to 110 percent of GDP before panic set in. Such a wide range does not lend itself to meaningful lessons on proper debt levels. Additionally, some middle-of-the-pack countries like Germany remained oases of stability as others became sources of contagion — despite the fact that Germany’s debt-to-GDP ratio was 75 percent in 2009, 10 points higher than that of troubled Spain and 20 higher than that of distressed Ireland. Hence it’s not clear that austerity is a good or even necessary economic decision when debt-to-GDP ratios are seemingly high. That’s because austerity can cause recessions, and avoiding an uptick in debt by incurring economic harm is often a bad deal. Japan demonstrates that austerity need not be the go-to decision even at high debt levels; the country’s gross national debt is 243 percent of GDP, but with low interest rates, this burden remains manageable — and the spending behind it remains necessary. No one is fretting about a surprise Japanese default, and government spending helps to mitigate Japan’s ongoing economic weakness. In terms of creditworthiness, US government bonds are still incredibly reliable, maintaining an AA+ credit rating — even with current gross debt at around 100 percent of GDP.

While austerity might not be an urgent prescription for the US, there is a case to be made that current borrowing will force an eventual reckoning. The theory holds that government expenditures are unsustainable and will cause problems decades down the road. Corresponding cuts are needed at present in order to fix or forestall this eventuality. The Congressional Budget Office raised these concerns in a 2015 report on long-run fiscal trends, contending that starting around 2020, “debt [will] be on an upward path relative to the size of the economy. Consequently, the policy changes needed to reduce debt to any given amount would become larger and larger over time. The rising debt [cannot] be sustained indefinitely; the government’s creditors [will] eventually begin to doubt its ability to cut spending or raise revenues by enough to pay its debt obligations, forcing the government to pay much higher interest rates to borrow money.” This argument therefore warns that the government’s rising financial burden from debt will eventually outpace the growth of the nation’s economy.

The problem with this analysis is that predicting the macroeconomy — and the government revenue it provides — decades in the future is the social science equivalent of reading tea leaves. Nobel laureate Paul Krugman has called such estimates an “especially boring genre of science fiction” due to their high variability, and Jared Bernstein, formerly a top economic advisor in the Obama administration, writes that economic predictions fail beyond a 10-year horizon. If year-on-year growth ends up just a fraction of a percent higher than expected, the debt would be a nonissue. Alternatively, if the world is hit with another large recession, fiscal crises could erupt. While no one knows what will happen years from now (who predicted the Great Recession?), we can be sure that austerity today will harm employment and the economy at large. Does it really make sense to act on an uncertain and likely flawed prediction of fiscal health, especially when such action will cause almost certain economic stagnation in the present?

Perhaps the largest problem with using the debt-to-GDP ratio to justify spending cuts is that it’s an incomplete snapshot. While this ratio does invoke national income and indebtedness, it neglects a crucial variable: interest. The extent to which a country can pay creditors without falling into arrears is highly sensitive to interest rates. When rates are extremely low, borrowing is cheap, since interest is what the government pays for the privilege of borrowing. Therefore, a drop in rates — say, due to the Fed’s response to an economic crisis — has the practical effect of mitigating the government’s financial liabilities. The US is still feeling the aftershocks of such a crisis, and US monetary policy over the last seven years has been constructed with this in mind. Hence government borrowing has never been cheaper.
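
A toy calculation makes the point concrete. All of the figures below are hypothetical, chosen only to show that the interest burden is the product of the rate and the debt stock, so a steep enough fall in rates can shrink the bill even as the debt itself doubles:

```python
# Hypothetical figures for illustration; not actual Treasury data.
def interest_share_of_revenue(debt, avg_rate, revenue):
    """Annual interest bill as a share of government revenue."""
    return (debt * avg_rate) / revenue

# The debt stock more than triples, but the average rate paid falls by
# roughly three-quarters and revenue grows modestly -- so the interest
# burden ends up *smaller* than before.
before = interest_share_of_revenue(debt=5e12, avg_rate=0.050, revenue=2.5e12)
after = interest_share_of_revenue(debt=18e12, avg_rate=0.013, revenue=3.3e12)
print(f"{before:.1%} -> {after:.1%}")  # prints "10.0% -> 7.1%"
```

This is why the debt-to-GDP ratio alone is an incomplete snapshot: two countries with identical ratios can face very different interest burdens depending on the rates at which they borrow.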

The graph to the right, made using Federal Reserve Economic Data from the Federal Reserve Bank of St. Louis, demonstrates as much. It depicts net interest payments as a percentage of total federal revenue over time. The current figure is in the ballpark of 7 percent, which is lower than it’s been in the last four decades. Notice how the graph remained steady during the recession. That’s because, although tax revenue decreased due to economic contractions, the Fed reduced interest rates enough to compensate. While many fiscal conservatives claimed that the Great Recession meant that the federal government needed to cut back, the ironic truth is that the aftermath of the Great Recession has helped expand the US government’s short-term ability to sustain debt. Although interest rates are starting to inch up, this leeway still very much exists: Rates are far from normalized, the deficit is 70 percent lower than its 2009 recession peak, and continuing economic weakness makes government borrowing a worthwhile tool.

Given the aforementioned evidence, running a budget surplus and chipping away at the national debt does not seem to be immediately necessary. In fact, it might even be self-defeating; budget cutting could actually exacerbate the debt situation. Krugman gave an excellent exposition of this very idea in the New York Times, using Greece to describe the negative consequences of cutting spending without help from monetary policy. He argues that because austerity hurts the economy and thereby reduces tax revenue, it both saves and costs money. Accordingly, rapidly moving from a deficit to a balanced budget shrinks GDP without immediately decreasing the debt. As Krugman points out, this means that the debt-to-GDP ratio initially goes up in an economy weakened by austerity, because GDP drops while the debt remains the same. This is exactly what’s happening in Greece, where attempts to raise the surplus by one percent could cause a five-point rise in the debt-to-GDP ratio. The fiscal situation worsens even more after accounting for the deflationary effects of austerity. When cutbacks hurt economic activity, price levels begin to decline. A trend of decreasing prices causes people to delay purchases, since their dollars are worth more tomorrow than today in real terms. Less consumption and more savings exacerbate already anemic demand. The result, as Krugman states, is a smaller economy with the same debt — a categorical loss.
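
Krugman’s arithmetic can be sketched in a few lines. The numbers here are stylized rather than real: a Greece-like starting ratio of 180 percent, and a fiscal multiplier of 3, on the high end of estimates but plausible for a depressed economy with no monetary offset:

```python
# Stylized illustration of the austerity paradox; all numbers hypothetical.
def ratio_after_austerity(debt, gdp, surplus_shift, multiplier):
    """Debt-to-GDP ratio after shifting the budget toward surplus by
    `surplus_shift` (as a fraction of GDP), when every dollar of cuts
    shrinks GDP by `multiplier` dollars."""
    new_debt = debt - surplus_shift * gdp             # the surplus retires some debt...
    new_gdp = gdp - multiplier * surplus_shift * gdp  # ...but the economy shrinks more
    return new_debt / new_gdp

before = 180 / 100  # starting ratio: 180% of GDP
after = ratio_after_austerity(debt=180, gdp=100, surplus_shift=0.01, multiplier=3.0)
print(round(after - before, 3))  # positive: the ratio RISES even though debt fell
```

Under these assumptions a one-point move toward surplus raises the ratio by roughly 4.5 points of GDP, and the sketch omits the lost tax revenue and deflation described above, both of which push the outcome further in the same direction.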

The only exception to this pattern is if monetary policy can lower interest rates and mitigate the economic costs of tax increases and/or lower spending. But interest rates are stuck near zero percent and cannot go significantly lower — negative rates would take from savers and investors, causing them to withdraw their money from the financial system. So the Fed is no help, and proposed budget cuts would cripple aggregate demand in its already weak state; America would only be backpedaling. This is exactly what we see when we look at the data: Greece’s debt-to-GDP ratio has failed to fall despite several rounds of harsh austerity. Instead, budget slashing simply caused its real income to crater, dropping by 25 percent in just a few years. It seems that fiscal conservatives have formulated a plan of one step forward, two steps back.

There is a level on which austerity appears sensible for the United States. However, it is a level fraught with intuition, misdirection, and misunderstanding. Rationales for rolling back spending come with the sheen of responsibility. They seduce both laypeople and economists. They feed hyperpartisanship and help politicians posture. While these approaches are persuasive, their logic is flawed. It is public policy malpractice to see dangers where there are none — doing so raises the risk that the United States rushes toward even greater fiscal and economic hazards. So the next time some politician, polemicist, or average Joe rails against the big spenders in Washington, do not be seduced. Anti-spending crusaders are touting a solution in search of a problem.

On September 8, 2014 — three days before the 41st anniversary of the violent coup that overthrew President Salvador Allende and brought General Augusto Pinochet to power — Santiago, Chile was again rocked by violence. That morning, the Chilean Supreme Court ruled against an appeal from a group of Marxist-Leninist radicals serving time for the 2007 murder of a police officer during a botched bank robbery. In the lead-up to the ruling, radical groups throughout the country had warned that the appeal’s denial would result in retributive attacks.

At roughly 2 p.m., a fire extinguisher filled with gunpowder exploded in an underground shopping center at the Escuela Militar metro station in the affluent neighborhood of Las Condes. The bomb injured 14 civilians and unleashed panic throughout the city. Ten days later, the Conspiracy of the Cells of Fire (CCF), an underground anarchist terrorist group, published a short manifesto claiming responsibility for the Escuela Militar metro attack.

Though the attack was tragic, it was not the first of its kind. The Escuela Militar bombing was one of at least 30 terrorist attacks in Santiago that year alone, and since 2004, over 200 bombings have rattled the city. In some ways, these incidents are relics of the 17 years of oppressive military rule under Pinochet following the 1973 coup. Under Pinochet’s regime of forced disappearances and political oppression, the country’s civil society structures — through which marginalized groups could voice their dissent and participate in governance — broke down.

The country’s legacy of political violence continues today, in large part due to Chile’s mishandling of violent threats and archaic antiterrorism laws. But the Chilean people are demanding drastic change, both through broad shifts in existing policy and, perhaps, the re-imagining of the country’s constitution. The Escuela Militar bombings and other attacks of its kind are a reminder of Chile’s checkered past, and if the nation is to fully move past this troubled history, the government must repeal and replace its antiterrorism laws and treat peaceful protest as a critical channel for political engagement.

By most measures, Chile has seen a great deal of success since its transition to democracy in 1990. The country boasts the highest GDP per capita in South America, a figure that has more than quadrupled in the last quarter-century. The nation’s political institutions also rate as some of the strongest in the region. In 2009, a paper from the Inter-American Development Bank characterized Chile as a country that has successfully embraced democratic institutions and dismissed protests as “sporadic and…[far less relevant] to the policymaking process in general.” Moreover, the Economist recently named Santiago the safest city in Latin America. Nevertheless, the last two Chilean presidents have failed to create the kind of programmatic reforms that citizens have demanded. The civil unrest and anarchist attacks that have plagued Santiago for the past 11 years challenge the popular perception of Chile as a thriving modern democracy and rising economic power.

The Chilean success story looks much less rosy just beneath the surface. The country is ranked one of the most unequal in the world, and 14.4 percent of its population lives in poverty. Support for Chile’s political establishment is also declining: A 2015 study by researchers from the Pontificia Universidad Católica argues that in Chile there is “a growing distance between political parties and the society, in parallel with an increased criticism of electoral processes and representative institutions.”

A proliferation of protest movements and incidents of civil unrest in recent years reflects a growing sense of alienation. Today, 71 percent of Chileans support drafting a new constitution, reflecting the growing hunger for change. In addition to the wave of anarchist attacks that have rocked Santiago since 2004, Chile has also grappled with a sometimes violent indigenous rights movement as well as widespread student demonstrations, beginning with the so-called “Chilean Winter” of 2011. These three movements, in the words of Brown University Professor Arnulf Becker Lorca, “are all connected by general discontent” with the Chilean government’s failure to adequately represent the will of its people.

Perhaps no group has felt that discontent longer than the Mapuche. With over 1.5 million members, the Mapuche are Chile’s largest ethnic minority and have long been politically marginalized. Beginning in 1852, the Chilean government systematically and unilaterally imposed its sovereignty over the Mapuche, who would go on to face a century and a half of political, economic, and social dispossession. Since the country’s return to democracy in 1990, Mapuche activists have sought more political autonomy at the local level. But the Chilean government has often seen the Mapuche’s indigenous rights movement as being at odds with the push for a developed, modern economy.

Beginning in the 1990s, Mapuche activists began protesting large development projects, such as the construction of hydroelectric dams, on land that is culturally significant to their people. The government has mostly ignored the demands of indigenous groups and has even gone so far as to arrest antidevelopment activists and protestors. According to Mapuche activist José Naín Curamil, more than 250 Mapuche have been detained by the government, including Naín himself. In addition to arresting activists, the Chilean government has also continued to plan and develop new hydroelectric projects, an agenda that one indigenous advocacy group calls “a true slap on the face of human rights and [the] interest of the region’s inhabitants.”

Compared to the Mapuche rights movement, the Chilean Winter is a much younger and more popular political protest. Since 2011, student protestors have taken to the streets in cities across the country to demand policy changes. The Chilean Winter first aimed to address high university tuition rates, which represent 2 percent of Chile’s GDP — the second-highest rate in the world — and other failures of the Chilean education system, such as the nation’s privatized schools and underperforming teachers. These efforts have since spurred mass protests against everything from metro fares to laws banning abortions.

In contrast to the anarchist bombings or the indigenous rights movement, the Chilean Winter protests have had a more significant impact. The BBC stated in a 2014 report that, aside from the Escuela Militar metro attack, anarchist bombings were a “nuisance for Chileans rather than a serious threat to public safety.” The student protests, however, have been impossible to ignore. One protest during former President Sebastián Piñera’s administration brought 150,000 students, professors, and other demonstrators to the streets to demand education reforms.

As a result of Piñera’s reluctance to embrace the reforms demanded by protestors, his party was swept from power in 2013. Voters replaced Piñera with current President Michelle Bachelet, a progressive icon of the Chilean left. Bachelet had previously served as president from 2006 to 2010, and her return to power was aided by her promise to deliver many of the reforms demanded by protestors on issues like women’s rights and education. While Bachelet initially succeeded in passing a major school reform, progress has slowed, as the president’s attention has been refocused on other issues, such as corruption and a major recession.

However, frustration hasn’t always been channeled into peaceful protest. While all three protest movements stem from similar grievances about the current state of political affairs, the violent Chilean anarchist campaign differs in key ways from the Mapuche rights movement and the Chilean Winter. Mapuche and university student protests have at times turned violent, but these movements have sought to distance themselves from the degree of violence used by anarchist terrorists in the Escuela Militar metro attack.

Historically, Chilean anarchists have tried to avoid civilian casualties. Bombs have typically been detonated in the dead of night, when bystanders are less likely to be caught in the ensuing blasts. The targeting of the underground shopping center last year represented an enormous break with precedent. Anarchist bombers have historically targeted banks, government buildings, and churches — structures that represent institutional systems opposed by anarchism. The symbolism of the Escuela Militar metro attack is murkier. The CCF, in claiming responsibility for the attack, denounced the metro’s shopping center as a symbol of “bourgeoisie commercialism.”

Beyond this general commitment to terror tactics and the shared philosophy of anarchism, it’s unclear what holds the violent anarchist movement together. Unlike the Mapuche rights movement or the Chilean Winter, Chile’s anarchist terrorists in the CCF have not yet articulated programmatic policy objectives or an agenda. Instead, they push for broad and diffuse goals, such as an end to consumerism or political oppression. This organizational weakness has made it possible for other political actors to tie the violent anarchist attacks to almost any political or social agenda that suits their needs. Following the Escuela Militar metro attack, a handful of left-wing politicians immediately speculated that the incident was a false flag operation on the part of right-wingers to discredit the Chilean left. On the other side of the political spectrum, the right-wing intendant of the Bío Bío region claimed in 2011 that the attacks and other forms of social unrest could be traced directly to mothers having children out of wedlock, saying: “Chile is a country without a family.”

While these explanations may work well as political talking points, they fail to adequately explain the reasons behind this 11-year-long string of bombings. Anarchists aren’t attacking banks, churches, and government buildings because of a right-wing conspiracy or a weakening of family values. Although they differ in their choice to use violent methods, anarchists are bombing the infrastructure of a political and economic system that they, like Mapuche and Chilean Winter activists, believe has failed the Chilean people.

Events like the Escuela Militar bombing will continue until the Chilean government can effectively prosecute bombers and put them behind bars. Thanks to half-measures and the state’s reluctance to address its legacy of political violence, terrorists have been allowed to walk away from attacks without facing serious consequences for their actions.

The case of Mónica Caballero and Francisco Solar is a good example. Between 2006 and 2010, Caballero and Solar took part in an extended campaign of anarchist terrorist attacks known as the “Casos Bombas,” which would ultimately include 30 bombings of churches, government agencies, banks, and other targets. They were arrested in the summer of 2010 following an especially high-profile attack just blocks from then-President Sebastián Piñera’s house.

The government was initially thought to have a strong case against the attackers and chose to pursue charges of terrorist conspiracy against Caballero and Solar. However, the prosecution’s sloppy handling of the trial ultimately led to the case being dismissed two years later. The prosecutors and judicial officials responsible for the bungling of the trial faced fierce criticism at the time, with former Interior Minister Andrés Chadwick stating, “I believe that some of our courts of justice owe an explanation.” The state’s failures in the Casos Bombas would come back to haunt it just two years later when Caballero and Solar were connected to a terrorist attack, this time on the Basílica Pilar de Zaragoza in northwestern Spain.

The reluctance of the Chilean judiciary to try bombers as terrorists stems from Chile’s complicated history of terrorism. Until 2004, Chile had not experienced a major terrorist attack since the waning days of the Pinochet regime. The law used to prosecute terrorists today is the same antiterror law used by Pinochet to illegally detain political prisoners in the 1970s and the 1980s. During his 17 years in power, Pinochet oversaw the forced disappearances of over 3,000 Chileans and used this antiterrorism law to imprison 40,000 political dissenters. Not only is the law outdated and ineffective for addressing today’s terrorist threats, it is also widely unpopular in Chile due to its questionable history and extreme provisions. The law allows police to keep suspects in solitary confinement indefinitely without leveling charges against them and allows for the use of wire-tapping and secret witnesses in investigations. The Chilean public overwhelmingly opposes the law, making it very difficult to put accused bombers, like Caballero and Solar, on trial for terrorism charges.

In the ten years leading up to the Escuela Militar metro attack, the Chilean government jailed only one individual on terrorism charges. Others, such as Caballero and Solar in the Casos Bombas, had been brought to trial but ultimately had their charges dismissed or their cases thrown out. One accused bomber, Luciano Pitronello, was brought to trial in 2012 after he attempted to blow up a bank in Santiago. The explosive detonated in his hands, forcing him to seek medical attention and foiling his plan. After being brought to trial on terrorism charges, Pitronello was ultimately sentenced to six years of probation and had all charges carrying a prison term dropped, despite the fact that all of his actions had been caught on camera.

The Chilean government’s inability to prosecute anarchist terrorists under the Pinochet-era antiterror law raises the important question of why the law hasn’t been successfully changed or replaced with newer, less controversial legislation. In fact, ever since Chile’s return to democracy, there have been efforts to do precisely this. Piñera instituted some reforms to the law in 2010, but activists argued that these changes did little to alter the overall nature of the legislation. In both her 2005 and 2013 campaigns, Bachelet opted to distance herself from the law altogether, saying that Chile “does not need [an] antiterrorism law” and that, if elected, she would rely on other statutes to prosecute terrorist crimes.

Both Piñera and Bachelet kept the law on the books, however, and used it — albeit largely unsuccessfully — against accused terrorists. The Piñera administration used the law in its prosecutions of Caballero, Solar, and Pitronello. Ultimately, the administration failed in its efforts because of fierce opposition from the Chilean judiciary system — not because of any unwillingness to charge suspected criminals using the controversial law. Bachelet broke her campaign promise not to use the law during both of her terms, invoking it against Mapuche activists as well as against the Escuela Militar metro bombers.

In the immediate aftermath of the Escuela Militar metro attack, the Chilean government failed to provide the full-throated denunciation of the attacks that many had hoped for. President Bachelet at first claimed that, even after the bombing, “it can not be said that there is terrorism in Chile,” despite the fact that the bombing was a clear-cut example of an organization inciting terror by attacking civilians and infrastructure. After a cabinet meeting later that day, however, a Bachelet administration spokesman decried the attack as “an act of terrorism” and announced that the government would work to bring the attackers to justice using the controversial antiterrorism law.

Sure enough, the Chilean government followed through, arresting three suspects — Natalie Casanova Muñoz, Juan Flores Riquelme, and Guillermo Durán Méndez — within two weeks of the attack. Following the arrests, the government kept up its tough-on-crime rhetoric and appeared to be close to the verdict that it needed to regain order and silence its critics. But as time has gone on, the Chilean government seems to have fallen back into its old habits. There has been no new effort to prosecute terrorists, and no verdict has been handed down for Casanova, Flores Riquelme, and Durán. Fourteen months after their arrest, the three are still being detained without trial under the antiterrorism law.

In September 2014, student protest leader turned politician Francisco Figueroa summarized the root causes of civil unrest in Chile, saying, “it isn’t just a problem of Sebastián Piñera’s government, this is actually an institutional problem.” The continued use of the deeply unpopular antiterrorism law by both Piñera’s right-wing government and Bachelet’s left-wing government speaks to this very institutional problem. Despite a population clamoring for reform, Chile’s leaders have continued to use the law. For many Chileans, every instance of its usage evokes the repression of the Pinochet era and thus undermines the country’s democratic progress.

The Escuela Militar metro attack presents a crossroads for Chile. In the face of a devastating terrorist attack born of a general sense of dissatisfaction with Chile’s supposed success as a developing democracy, the Chilean government must answer a clarion call to reform the way the country both tackles political violence and responds to political protest. If the Bachelet government fails to answer this call to action, it risks losing the confidence and support of the Chilean people. Perhaps nowhere is this clearer than with the University of Chile Student Federation (fECH), the leading organization in the Chilean Winter student protests. In 2013, the fECH elected Melissa Sepúlveda to the organization’s presidency. Sepúlveda, an avowed anarchist, represents a radical turn for an organization at the heart of Chile’s political future. In a radio interview shortly after the 2013 elections, Sepúlveda made clear that the fECH expected little reform from the government, declaring, “the possibility for change is not in Congress.”

Nonetheless, there is some reason for optimism. Bachelet recently announced a major reform that could give even a critic as hardened as Sepúlveda a reason to hope for change. In October 2015, Bachelet declared that her government would begin the process of replacing Chile’s constitution, a document that dates from the Pinochet dictatorship. In doing so, Bachelet is creating an opportunity for Chileans who feel excluded from the political process to weave their voices and their concerns into the very fabric of how the government operates. It offers the Chilean government an opportunity to rewrite its outmoded antiterrorism law and replace it with something that better represents the model of democratic progress Chile strives to be.

In Libya, the Islamic State yet again rears its head. Though the opening of this new frontier does mark a threatening expansion of ISIL, it is altogether more indicative of the weak Libyan state into which the group has spread. ISIL has easily established itself in the fractured state, but its presence should not be treated in the same manner as in Syria and Iraq, with bombs and air raids. If ISIL is to be driven out of Libya, the Libyans themselves, as well as the international community, must instead focus on addressing the fractured nature of the country and work towards the establishment of a unitary and representative government.

The Islamic State announced its arrival in Libya with the release of a video in which the group barbarously beheaded 21 Egyptian Coptic Christians. These killings prompted the Egyptian government to promise to “avenge the bloodshed and to seek retribution.” Soon after, Egypt launched air strikes on ISIL strongholds, killing an estimated 40 to 50 individuals in ISIL-controlled areas. This response achieved little toward pushing the Islamic State out of Libya or addressing Libya’s weak state capacity.

On March 5, as Tripoli recovered from the most recent bombings, UN-sponsored talks were launched in Rabat in an attempt to reconcile the two rival Libyan governments, the Council of Deputies in Tobruk and the New General National Congress. Both governments claim legitimacy as Libya’s central governmental power and have hitherto refused to negotiate. Nonetheless, according to Bernardino León, the Head of the United Nations Support Mission in Libya (UNSMIL), “There is a sense of, if it’s not optimism, at least a sense that it is possible to make a deal, and that is something very important because in the last months, this was not the case.”

The resumption of talks may signal a renewed effort at diplomacy and at addressing the structural issues at hand. And, despite Egyptian President El-Sisi’s calls to launch an international military intervention force, diplomacy and assistance ultimately represent the best means by which the rest of the world can assist Libya.

Post-Revolution Libya

In 2011, the Arab Spring saw the rise of mass protests throughout Libya against the dictatorship of Muammar Qaddafi, calling for his ouster and a more representative government. With the help of a NATO bombing campaign, the interim government established after the fall of Qaddafi, the National Transitional Council (NTC), declared Libya “liberated” in October 2011 and announced plans to hold democratic elections. However, since the overthrow of Qaddafi, Libya has yet to find its footing.

In March 2014, the General National Congress, which took over from the initial transitional government, voted to establish the Council of Deputies (also known as the House of Representatives) to replace itself, all amidst rising public discontent with the government. Elections were held soon after. Both the low turnout rate and the high numbers of elected Federalists and Nationalists, in comparison to the 30 percent of seats gained by Islamic groups, gave rise to renewed protests and violence particularly amongst Islamic political groups and militias.

In May 2014, General Haftar of the Libyan National Army, affiliated with the Council of Deputies, launched attacks under operation “Libyan Dignity” against the terrorist group Ansar al-Sharia in Benghazi. The group is considered a powerful Islamic militia faction and is also thought to be responsible for the murder of the US Ambassador to Libya, J. Christopher Stevens, in that city in 2012. The “Libyan Dignity” campaign sparked a civil war, causing Islamist political groups and militias to unite as a movement called “Libya Dawn” against an increasingly alienated government in Tobruk. Libya Dawn launched an offensive in the western region of the country, eventually capturing Tripoli in August 2014 and establishing a new, autonomous government.

Today, Libya remains divided between the two powerhouses: the New General National Congress in Tripoli and the internationally recognized Council of Deputies government in Tobruk. Both governments have their own infrastructure, including separate central banks and parliaments, and each controls roughly 10 percent of Libyan territory. The rest falls into the hands of small religious and tribal militias, many of which are loosely affiliated with one of the two governments.

International Intervention

The 2011 revolution and armed conflict that removed Qaddafi from power was capped by a NATO-led air assault on the leader’s power centers in Libya. These air raids proved highly effective and significantly contributed to the overthrow of the Qaddafi dictatorship. Nonetheless, the intervention was criticized for overstepping its mandate as stipulated in UN Resolution 1973 and the “Responsibility to Protect” doctrine. What is perhaps more damning are the conditions in which NATO forces left Libya. The involvement of interventionist forces fundamentally altered the dynamics of power in Libya, both between Qaddafi and the opposition and within the opposition itself, as a result of the favoring of specific rebel groups. Such changes in governance and leadership were not responsibly monitored. At the end of the NATO intervention, Libya was left to reorganize on its own.

The only major international force still in Libya today is UNSMIL, which established operations in March 2014. It has hitherto had little success in uniting the competing Libyan factions to form a unified government. However, the most recent round of UN talks in Rabat is promising in that both rival governments are willing to partake in dialogue, a development thought impossible only a month ago.

The 2011 intervention should not be repeated. Instead, the international community must support UNSMIL in its effort to reconcile the two rival governments. Moreover, the covert roles of other states such as the UAE and Saudi Arabia in funding militias and launching bombing campaigns, particularly against the Libya Dawn government in Tripoli, must end. An international response such as the one taking shape in the UN talks in Rabat must instead be allowed to materialize. To reinforce the legitimacy of these talks, other institutions such as the European Union, the African Union (which was highly critical of the 2011 NATO intervention) and the Arab League must also play a role.

ISIL should not be fought in Libya as it has been in Syria and Iraq. Instead, the Libyan government itself must address the presence of the Islamic State, together with the support of the international community. The recent airstrikes launched by Egypt should not serve as the international community’s response, and the calls of El-Sisi for military intervention should not be heeded. ISIL has festered in the cracks of Libya’s highly fragmented state. Instead of further widening such divides through military intervention, these fractures must be addressed quickly.

While a recent Gallup poll showed that 50 percent of the public prioritizes environmental issues over economic growth, only 24 percent identify it as a critical policy area. This indicates that although the public highly values the environment, it doesn’t approach the issue in a political sense so much as a cultural one. Much of today’s widespread environmentalism seems increasingly passive and employed on the individual or communal level — perhaps due to repeated setbacks on the political stage. After all, the intense debate over the Keystone XL pipeline, the struggles within the EPA to develop better pollution rules, and the repeated failures to pass carbon taxes and cap-and-trade programs may have tempered the enthusiasm and resolve of the environmental movement. These unsuccessful maneuvers indicate the American public’s waning belief in the government’s ability to be proactive in the environmental realm. What’s needed now is an intervention.

Lately that intervention has manifested in a new and especially powerful tool for environmentalists: geoengineering, or the deliberate alteration of environmental processes. Perhaps “new” is a misnomer. While geoengineering has experienced a resurgence — it’s currently being used in California to combat drought — the strategy has actually been implemented for decades. Cloud seeding, the most common geoengineering technique, is the attempt to influence the amount or type of precipitation through the atmospheric dispersion of substances. It originated in the 1830s, when James “Storm King” Espy suggested that the US government burn down forests, believing that the smoke would stimulate rain. A century later, showmen in the West launched rockets containing catalysts into the clouds to induce artificial precipitation. The practice reached its peak in the Depression-era Dust Bowl, and while it would sometimes produce rain, it didn’t stave off drought.

Though cloud seeding was born from economic desperation, it came to unexpected prominence as a military technique. During the Cold War, the US military became increasingly interested in the wartime opportunities that geoengineering provided. That research came to fruition during the Vietnam War; in order to limit the movement of North Vietnamese forces, the military dropped silver iodide flares — thought to cause rainfall — over enemy territory. The project, dubbed “Operation Popeye,” was meant to slow the efforts of the Vietnamese army to move men and supplies during the dry season. Instead, the effect of the cloud seeding fell on civilians and likely caused the catastrophic floods and typhoons in North Vietnam that devastated much of the country’s harvest in 1971.

These unintended consequences of geoengineering demonstrate two key principles. First, weather modification can be successful, but dangerous. Co-opting the environment — whether through dams or silver iodide dispersion systems — is always risky. Second, militarized weather modification constitutes a total war strategy, as the attacks affect both military and civilian targets. This reveals something unique about weather modification: It can seem benign when wrapped up in environmentalist packaging, but even brief military experimentation reveals the ominous depth of geoengineering’s effects.

In light of this troubling reality, the United Nations created the 1977 Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, a declaration with 48 signatories — including the United States — that bans weaponized or hostile use of environmental modification. Regardless, the US military continued to pursue domination of weather modification techniques well after the convention; in the late 1990s, the US Air Force Academy produced a paper entitled, “Weather as a Force Multiplier: Owning the Weather in 2025.” The report, often referred to as Air Force 2025, details a series of futuristic systems that the military could develop in order to control weather patterns as a strategic asset. The Cold War may have been the beginning, but Air Force 2025 ensured that interest in weather modification did not end with the fall of the Berlin Wall.

Cyclic storms on demand or Zeus-like thunderbolts dropped from drones may not be realistic, but it’s easy to see why the military had experiments to that effect in mind. As Air Force 2025 points out, “A tropical storm has an energy equal to 10,000 one-megaton bombs.” The bomb dropped on Hiroshima released only 0.016 megatons of energy. However, no storms-on-demand will be cycling through anytime soon, as the report’s team largely failed to spark a weather modification revolution, and the technology needed remains far in the future. But some of the paper’s goals, at least for hyper-accurate weather monitoring, are nearing completion, and the success of the military in this field indicates a sustained strategic interest in the environment as an asset. The radar communications technology credited with a major role in the 1991 Gulf War, for example, has its roots in weather radar research. And it’s the military’s environmental monitoring technologies that produce the nighttime satellite imagery critical to US efforts to aid recovery from natural disasters.

But weather monitoring isn’t without its skeptics, who view it as a conspiratorial cover-up or a next step towards governmental weather control. In March, approximately 17,000 activists in Australia turned up to protest the country’s current government. Among them was an odd constituency with an imaginative message: America was controlling Australia’s weather. The High Frequency Active Auroral Research Program (HAARP), funded by the US Defense Advanced Research Projects Agency (DARPA), is a project that government officials have repeatedly said is designed for weather monitoring, but which many suspect of being for weather control. HAARP is also reportedly part of US radio communications and surveillance projects. Because of its secretive nature, it has been accused of everything from disabling satellites and causing Gulf War Syndrome to mind control and the destruction of the space shuttle Columbia.

This kind of paranoia about weather manipulation demonstrates the pervasiveness of today’s skepticism towards government involvement in environmental issues. While there is much to admire about environmental noninterventionism’s focus on local and private solutions, the conspiracy theorists at the movement’s fringes have negatively influenced attitudes towards geoengineering. DARPA probably isn’t using HAARP for mind control, but with the political nonstarter of conspiracy theory attached to it and other weather monitoring and control programs, there is now a distinct lack of pressure from the public for politicians to seriously consider even the mildest of geoengineering solutions. In short, the suspicion and distrust of a few have been a constant barrier to useful research and even-handed experiments in the environmental field.

As fantastical as some geoengineering plans sound, many come from a place of scientific rigor and could provide major environmental benefits if taken seriously. Take Nobel Laureate Dr. Paul Crutzen’s idea to shoot sulfate aerosols, which have a demonstrated atmospheric cooling effect, into the stratosphere in order to block sunlight and combat climate change. Although Crutzen’s editorial surfaced in 2006, the fundamental idea for weather modification as a climate change combatant is old news. In 1965 President Lyndon Johnson received a report titled, “Restoring the Quality of Our Environment,” which suggested research into the possibility of initiating climatic effects to counterbalance the atmospheric concentration of CO2. Yet it has only been in the past decade that these possibilities have been more fully explored. Since then, a larger body of academic work has demonstrated the long-term feasibility of Crutzen’s plans. The key phrase here is long-term feasibility; there is no clear consensus among scientists — or politicians — on the viability of current geoengineering technologies, or on which proposals may be the best and most cost-effective. Crutzen’s work is no exception, especially given the difficulty of aerosol delivery and distribution.

Geoengineering’s problems also bring with them the probability of dissent. Gauging consent among those affected will be difficult. Efforts like space reflection mirrors or stratospheric aerosol release can’t be localized, raising questions about whether any one country has the right to use these technologies, as well as questions about who would bear their cost. Smaller projects that would circumvent the ownership debate have their own problems. When California’s cloud seeding recently came to light, controversy followed, despite the fact that the state had just experienced one of the driest years in its history. Artificially induced rainfall effectively “steals” rain from surrounding areas and, if normalized, would cause a redistribution of rainfall between locations. As such, a complex debate about water rights surrounds the practice.

Conspiracy theories aside, there are scientifically rigorous critiques, which portray geoengineering as just another Band-Aid — albeit a big one. Modification is simply an adaptation strategy for dealing with environmental concerns, and while it would likely be effective in mitigating the impact of, say, climate change, it doesn’t tackle the source. As Dr. James Lovelock, a prominent environmental scholar, observed, “Consider what might happen if we start by using a stratospheric aerosol to ameliorate global heating; even if it succeeds, it would not be long before we face the additional problem of ocean acidification. This would need another medicine, and so on. We could find ourselves in a Kafka-like world from which there is no escape.”

This is not a trifling problem for the practice of geoengineering. Then again, neither is climate change a problem to be trifled with. It demands attention, and ultimately pressure, from a public that is content with leaving environmental activism in the realm of ecotourism and organic yogurt bars. After all, climate change is already Kafka-esque; it’s estimated to create 50 million environmental refugees by 2020. Consideration of geoengineering is especially prudent given that the ideal solution to global warming — one that would tackle the source of the problem — has so far proved impossible to find. It is time to approach this global issue both creatively and persuasively. Weather modification, though more salve than cure, may prove to be just what the doctor ordered.


The Sochi Games have closed, and I am sure many citizens of Russia are wondering: was it worth it? The Olympics are expensive and disruptive to the host country. However, they are exceedingly popular with voters and bring a great sense of pride. They can also have huge symbolic power. The 1992 Barcelona games, for example, symbolized how Spain went from a dictatorship to an Olympics-hosting democracy in just 17 years.

So, many debate whether or not the Olympics were worth it.

One piece of this debate played out last year in Atlanta. The Braves are leaving Turner Field, the stadium from the 1996 Olympics, for the suburbs. The Braves leaving the city limits represents a blow to downtown Atlanta. Rembert Browne, Atlanta native and ESPN writer, wrote a beautiful piece about this on ESPN’s Grantland website. Do go read it. However, the Braves moving does give one a sense of the working life of a stadium in America these days: 17 years. Turner Field was built brand-new for the 1996 Olympics, and current plans are to demolish it after the new stadium is built.

Occurrences such as this are why a publication like The Economist, an outlet as British as can be, refused to endorse the Olympics coming to London. They said to let Paris handle the hassle. They estimated the Beijing Games cost $40 billion, in addition to the disruption of daily life in Beijing. Many of the detractors of the Olympics also point to Greece, which hosted the 2004 Olympics. Most of the facilities created for that event are no longer even used. They say the event was a one-time shot for the country, not something that generated sustained tourism or international interest.

Some cities even offer a sort of counteroffer to the Olympics. New York put out a marketing campaign in 2012 around “skip the Olympics, come to New York,” trying to lure some international tourists away from the expensive Olympic crowds.

For a test of how much development happens due to a major event, we can look to Brazil. This year, Brazil will host the World Cup, followed by hosting the 2016 Olympics. If, at the end of 2016, things are looking up in Rio, proponents of the Olympics will say the games provided a major boost to the city.

We shall see, at the end of 2016, how those two major events affected Rio. There is widespread agreement that hosting many smaller events provides real economic benefits. Keeping hotels full and restaurants busy truly helps a city. That is why nearly every city has a convention and visitors bureau. From what I have heard, most Londoners think the 2012 Olympics were a great success and are very proud of hosting the games. I wish I could go ask the citizens of Sochi what they think. In Rio, people are already protesting the World Cup and Olympics as wasteful spending. The debate continues.

If you would like to watch two economists debate the merits of hosting the Olympics, click here.

If you would like to read The Atlantic Cities’ Stephanie Garlock’s great analysis of the Braves’ stadium move, click here. She is one of The Atlantic’s Urban Wonks. #dreamjob

It seems that almost everyone is disappointed with October’s government shutdown. Tea Party sympathizers are unhappy that their strategy didn’t work, establishment Republicans are upset they tried that strategy at all, and Democrats and moderates are dismayed there was a shutdown in the first place. But despite the large segment of politicians who did not want a shutdown, this doesn’t mean that they could have kept the government open. The event was not just another budget dispute or political fight, but a fundamentally ideological battle between two parties. One party believes that government is fundamentally evil; this understanding complicated any compromise that might have arisen, because party members lacked the ideological drive to keep the government functioning. The other party believes that government is fundamentally good, so they refused to sacrifice parts of the government to keep the rest alive; they simply placed too much value on most government functions — especially policies as politically sensitive and ideologically significant as Obamacare.

The argument between government as good and government as bad is much deeper and more fundamental than the divides other countries’ political systems suffer. But there was also a motive for the parties beyond ideology: political maneuvering. Republicans wanted to tarnish Obama, while Democrats wanted both to protect the president and to avoid setting the precedent of negotiating over funding the government, which they view as an obligation of the oath of office. It seems that the only way for the shutdown to end was for it to get painful — so much so that the parties would be forced to break with ideology and partisanship and make concessions. There were protests at the closed World War II memorial in Washington just a few days into the shutdown; these might have become widespread had the crisis been prolonged.

The debt ceiling was the looming pain that ended the shutdown and forced the rocky consensus. The last comparable crisis was in 2011, when President Barack Obama and House Speaker John Boehner, both moderates, tried and failed to use the situation to strike a grand bargain that would reduce the deficit. Since then, any hope of a big deal has died and been exchanged for hopes that the government will just stay open and keep funding its programs. This meant that moderates had nothing to lobby for this time. They couldn’t hope for debt reduction or even any real progress. Having come near the brink of shutdown once, some lawmakers had become desensitized to the idea. This crisis sprang from the combination of this decreased fear of jumping over the cliff and the deep political differences between the parties.

Both debacles — 2011 and 2013 — focused on the debt ceiling and on operating budgets. Between two parties that fundamentally disagree over the role of government, the debt ceiling and government funding are natural objects of contention. It is not subject but impact that distinguishes these two crises. The shutdown was partial — everything related to national security stayed open — but a government default would have caused more immediate and much larger financial chaos. A partial shutdown is not scary enough to force political surrender; a debt default is.

Saying that Congress is partisan only scratches the surface of the problem; there is a long history that builds up to the dysfunctional nature of Congress today. The main political events of the Obama administration — the passage of the Affordable Care Act, the 2010 Republican midterm election victory and the 2011 debt ceiling crisis — paint a picture of bitter disagreement, resolved only at the last minute, that foreshadows the story of the 2013 shutdown. The most dynamic Congressional politics have come from the House Tea Party Caucus, a group of approximately 50 Republicans who can only be unseated by a challenger from the right due to their radicalized constituencies. Not only did Republicans retake the House in 2010; they also swept local races, giving GOP-dominated state legislatures the power to gerrymander Congressional districts and creating some markedly conservative seats. This gerrymandering works best in big states that are majority Republican but also contain substantial Democratic populations, such as Texas, North Carolina and Pennsylvania. Of course, Democrats did the same thing in Illinois and a few other states, but they controlled fewer state legislatures and so had less of a chance to gerrymander their way into stealing seats. In 2012, Democratic Congressional candidates received more than 1 million more votes than GOP candidates, but Republicans won 35 more seats.

Partisan districts do not lead to extremism on their own. Democrats have safe districts throughout the country, but only 16 House Democrats voted against the legislation that averted the fiscal cliff at the start of 2013, compared to 151 Republicans who did so. This may be because the Occupy movement — in some ways the Tea Party’s equivalent on the left — never became involved in electoral politics in any comparable, concrete way. Safe left districts are still filled with moderately left voters, not far left ones. The far left is simply not as involved in the Democratic Party as the Tea Party is in the Republican Party. An active right flank has therefore pulled the House Republican Caucus to the right. Some current members, having either obtained their nomination through the Tea Party insurgency or watched the process unseat their colleagues, feel pressure to vote on the conservative line even if they aren’t ideologically as conservative as their Tea Party counterparts. In one-party districts, the primary, not the general election, is the deciding factor. These primaries often do not receive much attention, but this doesn’t mean they don’t matter. It means that the people who can drum up the most passion and excitement — the people most upset with the status quo and invested in changing it — win nominations.

The situation has undermined Boehner’s power over his caucus. Republican aversion to legislation, especially the earmarks that have traditionally been used by political power elites to win individual legislators’ support, has given Boehner fewer tools than past Speakers have had. He has nothing that his members need for re-election; sometimes, they are even safer defying him. With so many members either fearing or caucusing actively with the Tea Party, Boehner cannot compel his party the way some past Speakers could. Because many Tea Party representatives do not have ambitions outside of their Congressional seats, being too far to the right to aim for executive positions or a Senate seat isn’t a problem. For ambitious Tea Partiers, the path to power is through increasing their seniority in the House, which means winning primaries for re-election and always keeping one foot on the party line. That makes the House a poor bet for compromise. The solutions to both recent fiscal crises have come from the Senate, where statewide elections result in more moderate members, and longer terms of office give Senators more freedom of action.

All this is to say that budget negotiations between President Obama and the House failed not because politicians in Washington cannot get along, but because real differences exist between the two factions, or at least between the politicians in those factions. It takes a fear of real-world consequences — stock market dips, credit rating downgrades, debt default, job losses — for the two sides to suppress their genuine disagreement over how to run the country. After the 2011 crisis, the so-called “supercommittee” could not agree on a plan to avoid sequestration cuts because the sequester, intended to force compromise, was not painful enough.

The Tea Party wing holds as much power as it does because of the Hastert Rule — named for former Republican Speaker Dennis Hastert — which states that the Speaker should only allow votes on bills supported by the majority of the majority. Under this principle, and under normal political circumstances, Boehner should only call bills to the floor that most Republicans support. These crises end, however, when Boehner, facing impending fiscal doom, allows bills to reach the House floor that pass with almost all Democratic votes and a minority of Republican ones. While this course of action is often the only way to get legislation passed, the practice breeds resentment among legislators who question why their party controls a chamber if bills can pass over their opposition. As a result, Boehner must wait until the last minute to avoid being branded a sellout by his caucus.

Theoretically, there is political room for a grand bargain, but reaching such a compromise would require a new kind of bipartisan cooperation. In the past, parties have passed combined spending bills so that each side gets something; for example, farm subsidies and food stamps were often funded in the same bill in order to garner both rural and urban support. Tellingly, House Republicans separated those two packages this summer. In the new political world, a grand bargain would be just the opposite: each party would walk away with something it forced down the opposition’s throat. Democrats would point to tax increases they made the Republicans accept. Republicans would hold up entitlement changes they imposed on Democrats. But for Republicans who wanted only one prize, finding such a compromise proved difficult.

No single law has demonstrated the partisanship of the Obama years quite like Obamacare. To liberals, it is the biggest prize of the Obama administration, the conclusion of a decades-long fight for a health care overhaul. To conservatives, it is an enormous, expensive, ill-constructed law, and many Tea Partiers see their elections as mandates to repeal it. But while many districts support repeal, the 2012 election results suggest that the nation as a whole does not. After the Supreme Court declared Obamacare constitutional, Republicans looked to 2012 as their chance to repeal it, but that chance never came. In the 2011 crisis, the dispute centered on government spending levels and the Bush tax cuts. This time, with Obamacare about to go into effect, Tea Partiers wanted to try everything in their power to achieve what they believe they were elected to do: repeal Obamacare.

Beyond the polarizing effects of Obamacare, such long-term deals face many structural difficulties. The same time pressure that brings the two sides together makes crafting sophisticated legislation extremely difficult. And sacrificing their party’s most cherished goals, even in exchange for concessions from the other side, leaves representatives vulnerable in the next primary. It is easier to defend what the party already has until the nation’s back is against a wall.

While the ideological chasm between the parties is large and damaging, it reflects a much larger shift in the U.S. party structure. Before the civil rights era, the parties were much more sectional and much less ideologically pure, though just as partisan. In 1949, Southern Senators held up rent control legislation to force northern liberals to withdraw a civil rights bill the Southerners were filibustering. In 1963, the Southerners refused to pass appropriations bills in order to pressure northern politicians into ending consideration of civil rights legislation. The idea of hijacking the government to impose a partisan agenda is nothing new. At the time, such legislation seemed as elusive as a fiscal grand bargain seems now; the system appeared broken and the opposition intractable. Southern Senators, like Tea Party Congressmen, enjoyed easy re-election so long as they watched their right flank. But ultimately, the combined power of growing popular support for civil rights and remarkable individual leadership made civil rights legislation possible.

Now, instead of sections disagreeing over civil rights, we have parties disagreeing over the role of government, a divide shaped by the political history of the twentieth century. On the Democratic side, the New Deal’s counter-cyclical liberalism became the Great Society’s permanent social safety net. For the GOP, Eisenhower’s conservatism of economy became Goldwater’s conservatism of reaction. Parties can, and do, evolve ideologically in response to electoral pressure. The fact that budget negotiations have once again been pushed back to early next year will let us see how these narratives and ideologies play out. Another eleventh-hour debate might harden the parties’ extremes even further, but it might also spur the popular anger and legislative courage necessary to govern more effectively. At some point, voters will force their representatives to compromise before the nation has a gun to its head. Until then, we can count on the debt ceiling to do that job.

The Better World by Design conference, an annual gathering of Brown and RISD students, community members, and designers of all varieties, was bound to elicit critical thought on a number of subjects. What I did not expect, however, was to find myself critiquing the presentations and panelists I saw for their use of “poverty porn.” Poverty porn is a representational symptom of larger questions about the roles NGOs and governments play in providing aid.

Poverty porn is the term increasingly used for images deployed to solicit money or goods for charities and foundations attempting to help people in the developing world. These images often depict crying, starving, or generally destitute children, frequently wearing minimal clothing, in the street, or in a hospital setting. Another form of poverty porn is the image of people in the developing world smiling when handed some new plastic object that is supposed to improve their lives. This way of depicting aid is what makes poverty porn problematic: the people receiving goods lack agency in the narrative. Such images portray the developing world as a victim that only Western intervention and power can save.

Fittingly, the first panel I attended was titled “Designing New Narratives: Moving from Poverty Porn to Agency,” and featured Linda Raftree of Regarding Humanity, Victor Dzidzienyo of Howard University, and Leah Chung, who conducted on-site research as a RISD Maharam Fellow. To varying degrees, each panelist draws attention to the use of poverty porn and works to change it. The three only scratched the surface of the problem, however, and did not go into much depth on possible solutions. Nevertheless, the panel caught my attention and informed my analysis of the rest of the conference.

The interventionist outlook behind poverty porn recurs across many sectors of international aid and in society’s broader understanding of global social issues. Rudyard Kipling coined the phrase “white man’s burden” in his 1899 poetic commentary on American imperialism to describe just this phenomenon. The “noble enterprise” justification is still present in both governmental and non-profit development aid. Just as “Africa” and “Uganda,” or “Middle Eastern” and “Arab,” are used interchangeably, poverty porn furthers the facelessness of the native and the namelessness of the brown face.

Media depictions reinforce stereotypical images of poverty. Over and over we are bombarded with images from charities such as Smile Train, Operation Smile, and WaterAid. Big news sources often pick up and repeat such images; even when the discussion critiques harsh depictions of the developing world, the images themselves still circulate. Similarly, many charities and organizations boil serious issues down to percentages, ratios, and one-to-one comparisons. Intervention often becomes an economic question: “$1 invested in water is $4 invested in the community,” or “we can feed 4 million more people without spending one more dollar.”

Resisting this narrative can be difficult. Western nations have the technology, the funds, and the mobilization power to provide aid in the form of life-improvement items, livestock, or electricity and water infrastructure. But sending items that we think people need (see the TIMS video), and thereby solving their problems for them, reflects a lack of belief that they can manage their own lives. Effective aid is not about what you personally think people should want or have, but about what they think they need. Locals are far better equipped to specify the forms in which aid can be used most effectively. To draw on that local knowledge, it is critical to engage with the communities in question rather than making executive decisions without local consultation.

The solution to poverty porn is not for NGOs and governments to pull out of the developing world, but to reevaluate and reframe their positions there, particularly in Africa. One way to avoid falling unintentionally into poverty porn imagery is to let people within these communities tell their own stories, rather than relying on outside voices to do so. A Better World by Design advocates a thoughtful and intentional approach to design. Extended to international development aid, this means taking a deliberately collaborative role with local communities and populations, and listening to those stories. Significant change can be achieved by supporting local design and innovation, not merely exporting ideas from the developed world.

Two speakers at the Better World by Design conference merited attention for their interactions with local communities: Daniel Feldman, a regional ambassador for Architecture for Humanity, and Alex Eaton, co-founder of Sistema Biobolsa. Both work intimately with communities and families to build appropriate structures and develop suitable technologies. Feldman spoke of the critical role of architecture and design in Colombia in creating sustainable, usable structures while sidestepping bureaucratic zoning laws. Eaton brings biodigesters (units that convert manure and other organic waste into usable energy) to small communities otherwise plagued by a lack of electricity and by public health problems. What struck me was the degree to which both men consider their role and influence as outsiders in these small communities. They ask what the role of design should be, what the role of designers should be, and how to make objects that are durable, easy to install and uninstall, adaptable, modular, and inexpensive. Feldman and Eaton work on projects that see people as more than objects, choosing instead to empower them to use local resources and spaces to improve their quality of life rather than depend on the next installment of international aid.

The issue we return to is the role NGOs should play in situations of crisis or need. Should they take culture into account? Do NGOs function primarily as temporary services or as transitional helpers? International development aid is not black and white but incredibly nuanced. Too often, Western powers attempt to deploy one-item-fits-all approaches or to implement bureaucratic standards universally. Instead, they should take the vast variety and diversity of culture and community in impoverished areas as the foundation of their approach, and tailor solutions to fit each problem. A Better World by Design attempts to start this process by addressing social engagement through the framework of design. As ever, the devil is in the details. The sooner international aid programs realize this, the sooner the aid itself will no longer be needed.