July 8, 2011
Turning on Fox, CNN or MSNBC, or opening any well-respected news website yesterday, would have informed you not just of global affairs but of the sorry state of Western news media. The top headlines consisted of updates on Casey Anthony, News of the World and Derek Jeter, stories designed to grab attention and make money (or, in the case of News of the World, about another media outlet pursuing those ends). Profits are a necessary evil for media outlets in the West, but overstating their importance shifts the balance away from informative news toward pop-culture melodrama.
In the late nineteenth century, the competition for news media predominance in New York between Pulitzer’s New York World and Hearst’s New York Journal led both outlets to turn increasingly to sensationalism and scandal-mongering, alongside their typical, more serious stories, in order to drive up circulation (for more historical details look here). Critics of the two papers described these tactics as Yellow Journalism, a term that refers to exaggerating the more eye-catching, entertaining aspects of daily news for the sake of popularity or profits. The problems with this approach should be clear: when a newspaper’s resources are directed toward these ends, informative, research-driven news is undermined and readers are presented with a biased, unrealistic portrayal of the world. The Pulitzer and Hearst rivalry was over one hundred years ago, but viewing any of the most popular media sources yesterday (or today, or tomorrow) would lead one to conclude that Western media has ignored its lesson.
Dominating the headlines on July 7, 2011, alongside more useful stories concerning Libya and the US deficit debate, were the conclusion of the Casey Anthony trial and the News of the World scandal over unethical journalistic practices. The former likely needs no explanation; by this point most of us have been saturated with intimate details of Casey Anthony’s social life, the mysterious death of her child and the subsequent trial. The latter story originates in much the same type of journalism. News of the World, in an effort to get a leg up on the competition, hacked a missing girl’s cell phone to listen to her voicemail, even deleting messages when her mailbox was full to keep the flow of headline-worthy details coming. News of the World has also been accused of bribing political officials and hacking the phones of others, including deceased military personnel and their families, all of which has led its owner, Rupert Murdoch, to shut the paper down. These stories share an unsavory link that we are all, to some extent, complicit in: the profitability of scandal- and sensationalism-driven news.
The British Prime Minister, David Cameron, gave a press conference today in which he admitted that he, members of his administration and members of his predecessor’s administration shared blame in the News of the World scandal. He also called for three inquiries: one into this specific scandal, one into future regulation of the press and one into the culture of media and politics. Here are some excerpts from his speech:
“This second inquiry should look at the culture, the practices and the ethics of the British press. In particular, they should look at how our newspapers are regulated and make recommendations for the future. Of course it is vital that our press is free. That is an essential component of our democracy and our way of life. But press freedom does not mean that the press should be above the law.” … “So I believe we need a new system [of regulation of the press] entirely. It will be for the inquiry to recommend what that system should look like. But my starting presumption is that it should be truly independent … independent of the press, so the public will know that newspapers will never again be solely responsible for policing themselves. But vitally, independent of government, so the public will know that politicians are not trying to control or muzzle a press that must be free to hold politicians to account.” (The full text of this speech is available here)
Cameron is proposing that the British government finance a not-for-profit, third-party regulatory institution to monitor not only all of British media, but also the media’s connections to politicians, police officers and other government officials. The Prime Minister’s move suggests that the media can no longer be trusted to adequately fill its role as a monitor of both business and government, and curiously he has turned to civil society to meet this need (non-profit, non-governmental organizations are by definition part of civil society). That the Prime Minister turned to civil society to act as a third-party arbiter between government and the press is indicative not only of the problems associated with for-profit culture, but of our growing reliance on the non-governmental, non-profit sector for regulating human affairs. This, however, is a subject for future discussion.
What is interesting for the purposes of this post is what the Prime Minister did not address in his speech. Cameron appears to blame the News of the World scandal on a lack of regulation, but history suggests a more pervasive issue. The news stories that led to the scandal were those typical of Yellow Journalism: sensationalism, scandal-mongering (irony?) and the blurring of news and entertainment. News of the World turned to those tactics for the same reasons Pulitzer and Hearst did: greater profits relative to their competitors. What was at stake for News of the World on a day-to-day, edition-to-edition basis was not the quality of its news stories or the knowledge of its readers (it was, after all, a tabloid) but profits. News of the World did not hack the voicemail of deceased soldiers or missing girls because its staff felt the information gained by that violation of privacy would enlighten readers in any way, but because it would yield provocative, novel material that would give the paper a better headline than whatever The Sun was publishing that day.
Reading Cameron’s speech, one might believe that establishing a third-party regulatory watchdog designed to occasionally say “hold on there, the items in that news story were acquired through unsavory means and corruption!” is a realistic solution to the problem. In my opinion, it is not. The real problem is the pervasiveness of profit-driven culture. Today, government officials appear more concerned with acquiring political clout relative to their opponents than with solving real problems (US deficit reduction and climate change, for example), Western media outlets appear more concerned with providing the best entertainment (for what other reason are Fox News and MSNBC so unabashedly partisan?), and so on. Replace “political clout” and “entertainment” with “personal gain” and “profits” and the two become synonymous.
Establishing a non-governmental regulatory agency to monitor news media is, at best, a bandage on an infected wound for Britain. The real issue is the ubiquitous perception that short-term gains in profits or influence matter more than the long-term pursuit of knowledge and the resolution of real problems. Profit will likely always be a necessary reality for our media, but the goal should be to tilt the balance away from purely profit-driven decisions toward a strategy that pursues profits without sacrificing journalistic integrity or over-selling pop-culture sensationalism and ideological demagoguery. Until we in the West stop paying to be distracted by frivolous stories like the Casey Anthony trial or the intimate details of a missing person case, and until we decide that what we want from our news media is a thorough, unbiased report of the day’s most pressing issues, we will have to suffer the dysfunctional and unsavory media culture that we have all helped perpetuate.
May 19, 2011
Today President Obama gave a speech detailing his administration’s response to the Arab Spring, his support for fledgling democracies in the region, his rationale for intervening in Libya and not Bahrain, Yemen or Syria and a way forward for assisting democratic development in Tunisia and Egypt and for Arab-Israeli peace. Kori Schake writes for Foreign Policy that “he (Obama) was long on pedantry and short of concrete proposals,” and that “His national security team should have provided him a much better developed program of policies in advance of a major speech.” Schake is correct to a certain extent. When Obama discussed the government suppression of protesters in Syria, Iran and Bahrain, he was quick to criticize the repressive regimes in those countries but stopped short of offering any promise of direct support for the democratic movements there. Despite this, Obama’s speech was a success in the sense that it reaffirmed the cosmopolitan principles he advanced in his Cairo speech, that he offered significant aid to the Middle East and that he promised a slight improvement in American policy towards Arab-Israeli peace.
Obama fulfilled some of the promises in his 2009 speech in Cairo, but not all. In that speech, Obama argued:
“There must be a sustained effort to listen to each other; to learn from each other; to respect one another; and to seek common ground,” and that “recognizing our common humanity is only the beginning of our task. Words alone cannot meet the needs of our people. These needs will be met only if we act boldly in the years ahead; and if we understand that the challenges we face are shared, and our failure to meet them will hurt us all.”
In some respects, today’s speech and Obama’s recent policies toward the Middle East have lived up to these promises: Obama promised to relieve $1 billion of Egypt’s debt and to provide another $1 billion in borrowing, he asked the IMF and World Bank to create a comprehensive plan for democratic development in Tunisia and Egypt, he outlined a plan for the region’s economic development modeled on the post-Cold War reconstruction of Eastern Europe, he imposed sanctions on Syria and its president, Bashar al-Assad, and he intervened in Libya with NATO (though the efficacy of this is as yet undetermined). In other respects, some of which Schake notes, Obama’s speech and his policies in the region have fallen short of these promises. Critics of today’s speech will note, for example, the (perhaps intentional) absence of any mention of Saudi Arabia, a country ruled by a monarchy, and it is as yet unclear what the Obama administration will do regarding Yemen or Bahrain, or how it will react if the situation in Syria worsens.
Despite these omissions, Obama’s speech succeeded in continuing to frame US-Middle Eastern relations in a cosmopolitan light, emphasizing mutual respect, open dialogue and America’s willingness to aid and support the region. Obama began this process in Cairo in 2009, and today he reaffirmed those same aspirations and values. In today’s speech, Obama underlined his, and America’s, empathy toward the people of the Middle East. He said “failure to speak to the broader aspirations of ordinary people will only feed the suspicion that has festered for years that the United States pursues our own interests at their expense,” and “We have embraced the chance to show that America values the dignity of the street vendor in Tunisia more than the raw power of the dictator. There must be no doubt that the United States of America welcomes change that advances self-determination and opportunity.” Through these speech-acts, and by stressing that ownership of the Arab Spring belonged to the Middle East and not to America, Obama broke from the “American exceptionalism” tone that was central to the Bush administration’s framing of relations with the region. While Bush pursued unilateral policies toward the Middle East, going as far as to intervene directly in Iraq and to declare Iran “evil”, Obama has preferred to emphasize mutual respect and dignified dialogue when addressing the region.
Perhaps the most significant moment of Obama’s speech came toward the end, when he outlined his vision of a way forward for Arab-Israeli peace. Obama argued that the basis for peace should be a two-state solution, starting from the borders as they stood prior to the 1967 Six-Day War. The United States has long allowed Israel considerable leeway in discussions of Arab-Israeli peace, and so by advocating the pre-1967 borders Obama has promised to take a small amount away from Israel in favor of the Palestinians at the bargaining table. This promise, however, is not nearly enough. Israel has had America’s unyielding support for decades, and, while the Palestinians are not wholly blameless (Hamas, for example, has consistently hindered peace negotiations), Israel will continue to resist negotiations as long as it can rely on America for unwavering military and diplomatic support. Asking Israel, which is certainly in the position of power in negotiations with the Palestinians, to concede more of its demands could have spurred more constructive peace talks in the coming weeks.
Obama’s speech advanced his effort to remake relations with the Middle East by promising support for the democratic movements of the Arab Spring and by reaffirming the shared values of the US and the region, but failed to suggest a sweeping overhaul of US policy in the region. Despite a few novel promises, such as redrawing the proposed borders of an Arab-Israeli compromise and the offer of US aid and relief of debt to Egypt and Tunisia, Obama’s speech, for the most part, stuck to realistic goals for the short term. By avoiding any clear promises to Syria, Bahrain and Yemen and by only incrementally increasing pressure on Israel, Obama has guaranteed the maintenance of the status quo in US policy towards the Middle East while somewhat succeeding in appeasing audiences in Egypt, Tunisia, Israel and at home.
May 13, 2011
Following the death of Bin Laden and the surge of democratic revolutions in the Middle East known as the Arab Spring, debate over the continued U.S. military presence in Afghanistan has begun in Congress. Bin Laden was not only the founder of Al Qaeda but also its oft-cited “spiritual leader,” an important symbolic figurehead in the effort to recruit new members to its cause. Even before Bin Laden’s death, however, there was some debate over Al Qaeda’s relevance in a Middle East marked by democratic uprisings. After all, as Richard Clarke notes in a New York Times op-ed, one of Al Qaeda’s express purposes was to replace Middle Eastern governments such as Mubarak’s regime in Egypt with Islamic governments, a goal that may now become a reality, depending on the path democratic reforms take in the region.
It is unclear whether Bin Laden’s death and the Arab Spring will truly marginalize Al Qaeda, but these events have led American Democrats to call for the beginning of troop withdrawals in the region. As reported by Josh Rogin for Foreign Policy, Democratic policymakers addressed a letter to President Obama that stated the following:
“Our nation’s economic and national security interests are not served by a policy of open-ended war in Afghanistan. … A significant redeployment of U.S. troops from Afghanistan beginning in July 2011 will send a clear signal that the United States does not seek a permanent presence in Afghanistan.”
Additionally, breaking from his party platform, Senator Richard Lugar argued that “with Al Qaeda largely displaced from the country, but franchised in other locations, Afghanistan does not carry a strategic value that justifies 100,000 American troops and a $100 billion per year cost, especially given current fiscal restraints.”
However, there is significant support in Congress for the continued presence of the U.S. military in Afghanistan. Representative Howard P. McKeon, Republican of California, recently proposed to the House Armed Services Committee a renewal of the authorization to use military force against Al Qaeda. Opponents of this bill argue that it would unnecessarily renew presidential war powers, thus continuing the exceptional wartime authority of the executive branch.
Underlying the current debate over whether to continue America’s military presence in Afghanistan is a long history of wars outliving their original aims. As James Sheehan notes in his course “History of the International System” (lectures available on iTunes U), wars are often begun for the sake of a policy goal relevant at the inception of conflict, but tend to persist for very different reasons. The Iraq War is an example: the United States invaded to remove Saddam Hussein and dismantle the country’s supposed WMD capability, but the war persisted long after Hussein’s capture and the discovery that the WMD intelligence was false, and came to be more about state-building in Iraq, and even “winning” in Iraq, than anything else.
The example that Professor Sheehan uses to make this point is World War I. Professor Sheehan notes that the original cause of conflict was the Austro-Hungarian invasion of Serbia. This invasion in turn triggered responses from the great powers in Europe: The Russians backed the Serbs, the French backed the Russians, the British backed the French and the Germans backed the Austrians. However, as Sheehan states, once conflict began “Serbia disappears from everybody’s screen almost at once.” Instead, World War I evolved into a war of attrition in both Western and Eastern Europe, and lacked a clear, overriding goal fueling the cause for war. As Sheehan notes, as the war dragged on it became more about the inability of the officials conducting the war to admit that the enormous sacrifices their country had made in human life and resources were in vain. Thus, the longer the war persisted and the greater those sacrifices became, the more important it was to “win” the war, despite the fact that the original cause for conflict, the invasion of Serbia, was no longer a relevant policy pursuit for any nation involved (except of course, Serbia).
At the onset of the Afghan War, President Bush characterized America’s purpose there as “carefully targeted actions designed to disrupt the use of Afghanistan as a terrorist base of operations and to attack the military capability of the Taliban regime.” Since 2001, however, the war has expanded to include state-building goals, such as training the Afghan military, and the complete overthrow of the Taliban regime in favor of Karzai’s government. Now Representative McKeon’s proposal would renew the legal basis for conflict in Afghanistan. While the original Congressional authorization approved force against the perpetrators of 9/11, McKeon’s proposal would expand this to authorize force against Al Qaeda, the Taliban and “associated forces.”
Perhaps the most important takeaway from Professor Sheehan’s lecture is his comment that the persistence of World War I caused the rivalries and antagonisms among Europe’s great powers to harden and become more steadfast, arguably contributing to World War II. Renewing conflict in Afghanistan and allowing the War on Terror to persist, along with the executive’s ability to indefinitely detain terrorist suspects, risks further ingraining a sense of Islamic rivalry with the West and America. The Arab Spring and Bin Laden’s death have arguably given us an opportunity for a fresh start in the Middle East. Not only will there be new regimes with which to conduct diplomacy, the Islamic world will also have less frustration with its own governments and less of a sense that America is partly responsible for propping up the abysmal, yet America-friendly, regimes in the region. Al Qaeda’s support relies on casting America as an enemy. If we take this opportunity to lessen our military presence in the Middle East, we may well succeed in our original goal as of September 12, 2001: to undermine Al Qaeda to such an extent that it becomes wholly irrelevant.
May 11, 2011
One of the better parts of last year’s health reform bill (PPACA) is a provision allowing states to opt out of many parts of the bill as long as they meet its coverage requirements. This may seem counterintuitive at first – how could the bill be strengthened by not implementing it? – but by setting a floor for the care all citizens must receive while acknowledging that there may be better ways to rein in soaring health care costs, the bill allows states to experiment with alternative and potentially superior delivery systems. One of the first states moving to take advantage of this provision is Vermont, which last week passed a bill to implement a statewide single-payer health system. The bill calls for relatively radical changes to Vermont’s health care system and, while many of its specifics remain to be fleshed out, it offers a number of promising ideas for solving the country’s health care crisis.
The Vermont plan is to combine all of the individual payers in the market into a single, unified payment system. Under this system – referred to as Green Mountain Care – individuals, small groups, federal and state governments, employers and insurers will all pay in, and the state will administer the plan. Vermont will set reimbursement rates, equalizing payments in the private and public sectors and dramatically reducing the administrative paperwork that must be done, but purchasers of private insurance will still be able to choose between different benefits packages.
This will be most easily accomplished with the individual and small-group markets that Vermont is given authority to regulate under PPACA, and with state and local level employees. These individuals will be able to choose between different plans on an insurance exchange, allowing them to compare different plans side-by-side. Vermont has also asked the Obama administration for a waiver for Medicaid and, in an unprecedented move, for Medicare as well. This would allow Vermont to receive the federal funds for both of these programs and funnel them through its Green Mountain Care plan.
The most difficult group to bring into the system will be the large employers who currently provide coverage for their employees. For these employers, Vermont plans to levy an across-the-board tax, with the hope that employers will choose to place their employees into Green Mountain Care since they are paying part of the price regardless. This could, of course, be problematic: if some employers leave the state rather than pay the increased taxes, the plan will have a negative effect on revenues, making it more difficult to provide care to everyone. Given significant buy-in, however, the bill contains a number of provisions to help contain costs.
Chief among these is the creation of the Green Mountain Care Board (GMCB), a much stronger version of the federal Independent Payments Advisory Board created by PPACA. The GMCB is given broad authority to implement cost containing measures including rate setting, alternative payment and delivery systems, and oversight of insurance companies (including premium increases) and hospital budgets. Importantly, providers are not beyond the scope of GMCB’s powers as they are in the federal IPAB, and GMCB would have power to negotiate drug prices, a power that many – including President Obama in his latest budget – have proposed for IPAB.
Another major source of savings will be the unified administrative system. As I noted last week, administrative overhead on the individual insurance market typically runs about 40 percent of total costs. For the small group market this is around 25 percent, and for large employers around 10 percent. In comparison, the federal government only spends around 3 percent of Medicare and Medicaid dollars on administrative costs. If Vermont is able to realize a significant fraction of these potential savings, they would be able to reduce costs across the board, reducing premiums for those still purchasing insurance on the private market, and allowing Vermont to reduce the tax it levies on employers.
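To make the arithmetic behind these percentages concrete, here is a back-of-the-envelope sketch in Python. The overhead shares are the rough figures cited above; the $1,000 premium pool is a hypothetical round number chosen purely for illustration, not a figure from the Vermont bill:

```python
# Rough administrative-overhead shares cited in the post,
# expressed as fractions of total spending.
overhead_share = {
    "individual market": 0.40,
    "small group market": 0.25,
    "large employer": 0.10,
    "Medicare/Medicaid": 0.03,
}

def admin_cost(pool_dollars, share):
    """Dollars of a premium pool consumed by administration."""
    return pool_dollars * share

pool = 1000.0  # hypothetical premium pool, in dollars

# Potential savings if the individual market could match
# Medicare's administrative rate on the same pool of premiums:
savings = admin_cost(pool, overhead_share["individual market"]) \
        - admin_cost(pool, overhead_share["Medicare/Medicaid"])
```

On these assumed figures, roughly $370 of every $1,000 in individual-market premiums goes to administration beyond what Medicare’s rate would consume, which is the scale of savings the unified-administration argument rests on, even if only a fraction of it is realized in practice.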
There are, however, significant challenges that still remain. The bill includes no concrete financing mechanisms – a study group is currently examining different options – and this has been a frequent target of criticism. This is less problematic than it appears, though, because costs will ramp up relatively slowly, giving the legislature plenty of time to implement the study group’s recommendations. Additionally, if Vermont receives Medicare and Medicaid waivers, a significant portion of the required revenue will come directly from the federal government. Another issue will be overcoming the resistance of the pharmaceutical industry to reduced profits and maintaining access in the face of reduced provider revenues.
If Vermont is able to implement this plan successfully, it will provide an example that other states can follow. Canada took a similar route on its path to single-payer: the system was first implemented in a single province before spreading and evolving into a national program. Obviously, successful implementation is far from guaranteed, and there may be plans better suited to America given the system we have today. But the point is that by allowing state-level innovation, PPACA provides for the kind of localized solutions that both conservatives and liberals should be able to support and which will, hopefully, end up saving the entire country money in the long run.
May 8, 2011
Since revolutions in the Middle East have persisted for months and protests in Syria have intensified in the past few weeks, there has been some speculation as to whether Iran could be the next in line to face a popular uprising of revolutionary magnitude. In February, Iran saw protesters take to the streets of Tehran as revolutions unfolded across the Middle East and days after Mubarak stepped down in Egypt. In response, the Iranian regime brutally cracked down on the demonstrators, and since then political unrest in the country has been at a low boil. Given the current state of the region and the recurrence of protests in Iran over the years, one might be tempted to expect a revolution in Iran on par with those in Egypt or Tunisia.
However, as Jack Goldstone notes in his article “Understanding the Revolutions of 2011,” there are critical components to Iran’s regime that make it resistant to significant change in the short term. In order to fully appreciate these components, it is first necessary to see how they have affected the potential for revolution in countries where popular uprisings were successful. Goldstone writes in the latest edition of Foreign Affairs:
“The revolutions of 1848 sought to overturn traditional monarchies, and those in 1989 were aimed at toppling communist governments. The revolutions of 2011 are fighting something quite different: “sultanistic” dictatorships. Although such regimes often appear unshakable, they are actually highly vulnerable, because the very strategies they use to stay in power make them brittle, not resilient.”
Goldstone clarifies this by detailing the factors that contributed to the success of the revolutions in Egypt and Tunisia. Common factors in the fall of “paper tigers,” as Goldstone calls “sultanistic” dictatorships, are corruption, rising education, growing inequality and worsening relations between the “sultan” and the country’s military. Indeed, all of these played a part in the fall of Mubarak and Ben Ali: in Egypt, Mubarak’s family and close friends had amassed billions in wealth, while in Tunisia Ben Ali’s family was repeatedly accused of corruption; the youth population had risen by 65% in Egypt and 50% in Tunisia since 1990 while employment opportunities remained stagnant across the Middle East; and both countries’ militaries had seen their influence wane in favor of the political power of those closer to Mubarak and Ben Ali.
In contrast, Iran is insulated from many of the threats that proved the undoing of the “sultanistic” dictatorships of Egypt and Tunisia. As Goldstone notes, “unlike any other regime in the region, the ayatollahs [of Iran] espouse an ideology of anti-Western Shiism and Persian nationalism that draws considerable support from ordinary people.” This gives the Iranian regime broader popular support than Egypt’s or Tunisia’s regimes enjoyed. Furthermore, while Mubarak and Ben Ali were the single, clear targets of the revolutions in their countries, power in Iran is divided among three individuals: Supreme Leader Ali Khamenei, President Mahmoud Ahmadinejad and parliamentary speaker Ali Larijani. Thus, while the youth movements in Egypt and Tunisia had a figurehead to blame for their countries’ high unemployment and rampant corruption, those in Iran do not. Lastly, Iran has two military factions that can be trusted to support the government at all costs: the Revolutionary Guard and the Basij. Given the ideological basis of both units, it is highly unlikely that either would rebel, and very likely that both will continue to undermine attempts at popular revolution.
Because the leadership in Iran is unlikely to be seriously threatened in the next few years (though there is some potential that conflict among the regime’s rulers could destabilize the country; see here and here), the role the country is playing in the current revolutions is of the utmost importance. Iran has been vocally supportive of the pro-democracy movements across the Middle East and has attempted to brand the Arab Spring as an Islamic awakening inspired by Iran’s own revolution in 1979. Only in Syria, where Iran has a vested interest in maintaining the status quo, has it sought to undermine protesters. Despite this, Iran has attempted to highlight America’s hypocrisy in treating Bahrain and Yemen differently from Egypt and Libya, while underplaying its own inconsistencies. The degree to which Iran will succeed in finding new allies in the region, and in creating new difficulties for America, remains to be seen, but the foundation for a regionally powerful Iran is beginning to be laid.
May 6, 2011
Over the last few weeks I have weighed the advantages and disadvantages of the health care proposals contained in the budget proposals of Rep. Paul Ryan (R-WI) and President Obama. I concluded that President Obama’s health care proposal was superior, not only because it provided a more realistic target for health care spending growth, but because it offered concrete proposals to reach that target. In contrast, the Ryan plan provides no credible mechanism to reduce health care spending, instead reducing government spending by shifting rising costs onto Medicare and Medicaid beneficiaries.
Lost amid this debate however, is the fact that the President’s budget is, at best, a center-left plan, and that many Democrats would prefer a more robust proposal. The Progressive Caucus’ budget is such a proposal – with its more equitable mix of tax increases and spending cuts – and it seeks to restrain health care spending in many of the same ways as Obama’s budget, with one important difference: the creation of a public option.
The progressive plan envisions the creation of a government-run health insurance plan that could compete with private plans on the individual market in order to hold down costs. The proposal would use the health insurance exchanges created under last year’s health care reform bill to let consumers on the individual market compare the benefits and costs of the government-run plan side by side with private plans. This approach would not only reduce costs, it would create a route for health care delivery innovation and provide a benchmark against which private plans could be measured.
The public option reduces costs in a number of ways. First, it can drastically reduce administrative overhead. Medicare’s overhead costs are about 3% of total expenses – drastically lower than the 40% share consumed by administrative costs on the individual insurance market. While other sectors of the insurance market do better on administrative costs – the small group market spends around 25% – none approach the efficiency of the government plan. Of course, there is no guarantee that a newly created public plan would do as well as Medicare, but proposals to create a public plan by allowing those under 65 to buy into Medicare could ameliorate this concern and, in any case, reducing administrative costs to 5 or 10 percent in the individual insurance market would yield significant savings.
The other major route available to the government to hold down costs is the use of its bargaining power to negotiate lower prices for drugs and services. Medicare currently uses this leverage to negotiate compensation rates for physicians that are 81% of those paid by private insurance companies. These cost savings could be passed on to consumers, making the public plan more competitive. Medicare cannot currently negotiate drug prices in the way that private insurance companies can, but the Progressive budget, as well as Obama’s plan, would do away with this restriction. Presumably, any newly created public plan would also be free of it.
The creation of a public plan could also yield benefits unrelated to cost. The Veterans Health Administration has pioneered the use of new quality measures, increased information technology usage and research-based coverage decisions, among other innovations. Medicare, too, has led the way in quality improvements – Medicare patients report greater access to physicians for routine care, for example – and innovations in both programs have filtered into the private market.
By doing all of these things, the public option will act as a standard against which we can measure private insurance. By holding down costs and competing in an open market, the public option will force private insurers to follow suit. Similarly, improvements in the delivery system and in quality of care will force private insurers to provide the same level of care or else lose customers. Thus, the public option will not only help to reduce the amount of money the government spends on health care, it will reduce costs in the market as a whole. When combined with the provisions in last year’s health care reform bill, and the additional reforms in President Obama’s budget, the Progressive plan for reducing the nation’s health care burden is a credible path forward – one that would go above and beyond any other proposals in reducing costs, increasing quality, and expanding access.
May 5, 2011 § 1 Comment
Following news of the death of Osama Bin Laden after an American military raid on his compound in Abbottabad, Pakistan, former Bush administration officials were quick to praise President Obama. Former Vice President Dick Cheney, in a rare moment of appreciation for our current president, noted that he wanted “to congratulate President Obama and the members of his national security team,” while former President Bush released a statement saying he had called Obama personally to offer congratulations.
But amid congratulations for the felling of the 9/11 mastermind, some Bush officials took a moment to remind the nation that they too played a role in Bin Laden’s capture and subsequent death. Paul D. Wolfowitz, former deputy secretary of defense, noted that the successful capture of Osama Bin Laden “also rested heavily on some of those controversial policies” of Bush’s administration. Additionally, Keep America Safe (Liz Cheney and Bill Kristol) gave credit to “the men and women of America’s intelligence services who, through their interrogation of high-value detainees, developed the information that apparently led us to bin Laden.” Those controversial policies mentioned by Wolfowitz and the interrogations credited by Keep America Safe, of course, were the indefinite detention of terrorist suspects and the use of waterboarding and other means of torture to extract intelligence.
Based on Wolfowitz and Keep America Safe’s account of the intelligence leading to the death of Bin Laden, one might believe it’s time to reopen the books on torture and break out the water barrels, rags and immobilization rack. A close look at the process through which Bin Laden was found, however, fails to suggest that torture led to any evidence that would have been otherwise unattainable.
Vital to tracking down Bin Laden’s location were detainee accounts that pointed investigators toward one of Bin Laden’s couriers. A New York Times article outlining the process of gathering intelligence on Bin Laden’s whereabouts noted:
“Prisoners in American custody told stories of a trusted courier. When the Americans ran the man’s pseudonym past two top-level detainees — the chief planner of the Sept. 11 attacks, Khalid Shaikh Mohammed; and Al Qaeda’s operational chief, Abu Faraj al-Libi — the men claimed never to have heard his name. That raised suspicions among interrogators that the two detainees were lying and that the courier probably was an important figure.”
The article goes on to explain that, after hearing of the courier in 2002, Bin Laden’s trail would later appear to have “gone cold” until CIA agents in the field were able to get the family name of the courier sometime after 2005. From there, more details emerged, including the courier’s full name, his license number, and eventually the compound housing Bin Laden that he frequented.
The Bush administration’s controversial tactics appear, at most, to have made a minuscule contribution to locating Bin Laden. The most important detail suggested as originating from interrogations of detained terrorist suspects is the existence of the courier, but his name, location and movements were all gathered through clandestine operations in the field. Detailing this, Armando at Daily Kos writes: “The first tip as to this hideout arrived six months ago and was due to “following the money.” How this connects to a “torture” success is not at all clear to me.” Moreover, the Times article only implies that the courier’s existence was discovered through interrogations; it does not state that this piece of evidence was acquired during a waterboarding session or any other “harsh interrogation” allowed by the Bush administration. Thus, the one piece of evidence that MIGHT have come from torture cannot be directly traced to these methods, and even if it could, it would be impossible to argue that “harsh interrogations” were the only possible means of acquiring this intelligence.
Torture constitutes a breach of international law, domestic law, basic human principles and as many other judicial, religious or moral codes as one could imagine. It has also yet to prove to be of any use whatsoever. Aside from the lack of any clear connection between “harsh interrogations” and the capture of Bin Laden or the prevention of any terrorist attack, the use of these methods has also led to substantial legal complications in bringing terrorist suspects to trial, as any evidence against them acquired through torture cannot be used in court, military or civil. Hence, the “efficacy of torture post-Osama” is just what it was pre-Osama: nonexistent.