Tuesday, October 16, 2012

Bogosity Alert!!!



Bogosity Alert! Bogosity Alert!

Wait till I tell you who’s coming to sleepy good ol’ U. N. of H.: the wares this man is peddling are inherently suspect – and that’s the reason for the alert. On the other hand, there’s a good chance he’ll blow your mind.
Bruce Bueno de Mesquita will lecture here on campus on Wednesday, October 24.  Bruce is a social scientist – a professor at NYU and a fellow at Stanford’s Hoover Institution – who has had the temerity to venture into the world of forecasting.  The tool of his trade is game theory – the mathematical modeling of interactions between parties to a transaction.  He has built a game-theoretic model that enables him to analyze and predict social outcomes – and a successful consulting practice based on it.
In game theory the focus is (at the outset) on two key factors.  The first is the assumption that players are rational – that is, recognizing that what drives our decisions is nothing more complicated than doing what is good for Number One.  The second is recognizing that very few outcomes in our social world are entirely our own doing: when it comes to the things that matter, results depend on our actions and the actions of others.
The elementary game theory model used to explain how the elements of this framework work together is the prisoners’ dilemma.   And even if you are not familiar with the game or its formal structure, it’s more than likely that you are familiar with the logic. The prisoners’ dilemma and related game-theoretic structures underpin the plots of numerous sitcoms and movies.  We find it in the now classic WarGames (the one with Matthew Broderick), in which the computer ultimately discovers that the only winning move is “not to play,” and in Denzel Washington and Gene Hackman’s Crimson Tide, in which a US nuclear submarine is seemingly instructed to launch a pre-emptive strike on Russia but is unable to obtain confirmation when the radio is damaged.  The strategic nature of the interaction emerges starkly: Denzel: “In my humble opinion, in the nuclear world, the true enemy is war itself.” But it’s not only in popular culture.  We find instances of game theory applications everywhere.  Game-theoretic models underpin the ranking algorithms in Google searches, prediction markets such as the popular Intrade, matching and recommendation models like those behind Netflix, the bidding mechanisms used on eBay, and much more.
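To make the logic concrete, here is a minimal sketch in Python of the standard prisoners’ dilemma (the payoff numbers are the usual textbook ones, not anything taken from De Mesquita’s model): each rational player’s best response is to defect no matter what the other does, so the pair lands on mutual defection even though both would be better off cooperating.

# A minimal sketch of the textbook prisoners' dilemma with illustrative payoffs.
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def best_response(their_move):
    # A rational player picks the move that maximizes her own payoff,
    # taking the other player's move as given.
    return max(moves, key=lambda my_move: payoffs[(my_move, their_move)][0])

# Whatever the other player does, defecting is the best response...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...so (defect, defect) is the equilibrium, even though (3, 3) beats (1, 1).
print(payoffs[("defect", "defect")], "versus", payoffs[("cooperate", "cooperate")])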
Why is being able to predict successfully such a big deal in the social sciences?  Well, ‘cause successful prediction is what separates the wheat from the chaff (and there’s very little wheat, frankly) – the ultimate, the triple crown, the holy grail of it all. ‘Cause till now we have done it badly, very badly.  Does anyone remember the fall of the Berlin Wall? The Arab Spring? And, dare I bring it up, the financial meltdown?  Did anyone see them coming?  Big fat no: here’s the latest (in what is now a veritable torrent) of chest-thumping from economists and the great financial quant Emanuel Derman.
Models are abstractions – they circumscribe time and space, limit the number of actors and actions, and establish cause and effect.  Events in the social sciences are visualizations of a slice (interesting to us) of a complex adaptive system. And when we intervene – be it with a prediction or a policy prescription – the system will change, or not: the point being that we don’t know what will happen to it.  Put differently, often the best we can do, when it comes to model-based predictions, is to make broad pattern predictions or provide explanations of the system’s underlying principles.   As a result, anyone venturing into the forecasting business – especially trying to predict anything in meaningful detail – is more likely than not going to be wrong.
So – here’s the mind-blowing thing, the reason for the too-good-to-be-true alerts. De Mesquita has been wildly successful at the prediction thing.  In his words, “According to a declassified CIA assessment, the predictions for which I’ve been responsible have a 90 percent accuracy rate.” His feats are documented in his book The Predictioneer’s Game (2009), where he describes the subtle combination of his model’s constituent elements and its predictions in rich detail yet in a simple, elegant manner.  In fact, probably because he has been right so many times, he has become a bona fide academic superstar.  Predictioneering success has brought him his own TED talk, appearances on many popular sites (several times on Big Think), and multiple guest spots on my favorite show, EconTalk with Russ Roberts.
The cool thing about De Mesquita’s game-theoretic forecasting model is that it is portable.  Its framework is built with elements that can be replaced and substituted with the particular narrative data necessary to illustrate cause and effect, drawn from the different contexts in which the model is deployed – like a car: on the inside, an automobile is internal combustion; on the outside, it can become a Lexus or a Hummer – red or blue, automatic, two-seater.  De Mesquita ranges widely: he analyzes and predicts the outcomes of wars, elections, regime change, corporate lawsuits, and even the Israeli-Palestinian conflict.
Generally, when someone is hyping a prediction based on some fancy model, we probably only hear about the model’s successes. To appraise the performance of any model one must also know how many times the model has been wrong.  De Mesquita acknowledges – and addresses – this pitfall up front:   “Of course, it will be much more convincing if you go back over the book’s predictions, as well as forecasts I have made online in speeches, podcasts and so forth, and judge for yourself. There is always the danger that I will unwittingly focus on the best of my predictions and give less credence to those that were wrong, but I will certainly try not to do that.” And there you have it: he is so brazen, and so good at what he does, that he is daring you to call him on it.
And we will. I’m sure he has flubbed and will flub forecasts on occasion.  But you should expect that: error creeps in everywhere, and Bruce will not always get it right.  Establishing how many times the model’s predictions went awry is not the only criterion by which we must evaluate it.  And getting it wrong does not necessarily mean the model is wrong.  Herein lies the task for us, the audience, when you show up at Dodds Auditorium at 12:30 pm next Wednesday, the 24th.
To fully judge the quality of a model we need to know not only when the model gets it right but, most importantly, whether it was right by coincidence or because of the model’s capability.  And, similarly, when it gets it wrong, we should know whether that was the luck of the draw or a flaw in the model.  Alas, for many social science events the counterfactual – what would have happened but for what did happen – is unknowable.  So how are we to figure this out?
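A toy calculation (mine, not De Mesquita’s) shows why a headline accuracy figure by itself cannot settle the coincidence-versus-capability question: if 90 percent of events resolve one way, a “model” that always predicts the common outcome also scores about 90 percent.

# A toy illustration: a high hit rate alone does not establish skill.
# If 90% of crises resolve "peacefully", a naive forecaster who always
# predicts "peaceful" also scores ~90% without any model at all.
import random

random.seed(1)
outcomes = ["peaceful" if random.random() < 0.9 else "violent" for _ in range(1000)]

naive_hits = sum(o == "peaceful" for o in outcomes)
print("naive accuracy:", naive_hits / len(outcomes))   # about 0.9

# So the question for any model is how much it beats the base rate,
# especially on the rare, hard-to-call cases.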
The essence of modeling – as we have noted in previous blogs – is to select the salient elements of a situation and set forth their interplay in a manner that conforms to understood principles of natural and social behavior.  Your task is to deploy your knowledge of those principles – especially of social behavior – to determine whether they impose sufficient discipline on the model’s inner workings for us to ascertain the power of game theory modeling.
Be prepared to be wowed – especially when he gets it right.  And even if he gets it wrong. 

arod

arodriguez@newhaven.edu


Tuesday, October 9, 2012

Public Education and the Regulatory State


Guest blogger Lesley  DeNardis is an Associate Professor of Political Science at Sacred Heart University, a member of the Hamden Board of Education, and a former UNH Professional in Residence in Economics and Public Administration.  She can be contacted at denardisl@sacredheart.edu. The issues she addresses in this contribution are to be found in greater detail in her forthcoming 2013 book: Who’s Minding the Gap?: Institutions, Ideas, and Interests Shaping Education Policy in Connecticut.  


Public Education and the Regulatory State

Let’s face it: education mandates are not the first words that come to mind when we raise the subject of the regulatory state.  What often springs to mind are burdensome requirements on businesses that impede economic growth, and rolling back the regulatory state usually calls for some type of deregulation aimed at fostering growth in the private sector.  Yet education mandates – a stealth form of taxation that flies under the radar – present just as much of a threat to economic competitiveness and growth in Connecticut.
Connecticut watchers already know that the Constitution State is among the most highly regulated environments for business.  The Tax Foundation ranked Connecticut 40th among states in terms of business tax climate (www.taxfoundation.org).  What may have escaped most people’s attention is that education regulations comprise the single largest source of regulation in Connecticut: an estimated eighty percent of new mandates each legislative session are in the area of public education. These regulations, passed down from the state legislature in the form of mandates to local school districts, are one of the largest cost drivers for school districts and are arguably just as much a drag on the economy as business regulation.  In fact, the Connecticut Business and Industry Association (CBIA), the leading business interest group in Connecticut, has identified unfunded education mandates as a looming economic problem in Connecticut.  In a 2006 survey (http://www.cbia.com/newsroom/Surveys/State_Mandate_Survey.htm), the CBIA found that, among the myriad problems facing municipalities, local officials identified unfunded education mandates as the most problematic.  Unfunded mandates burden local school budgets and contribute to property tax increases, creating an overall unfriendly business climate in Connecticut.  Relief from education mandates should be part and parcel of any education reform package.
What should be done?
A little-known law passed in 1977 requires the Connecticut state legislature to sunset laws that are deemed obsolete.  The main thrust of the law was to allow legislators to weed out obsolete regulations.  Despite its passage, the Connecticut General Assembly (CGA) has conducted sunset review only once over the last thirty years; every five years the legislature decides to postpone it.  The result: thousands of state laws remain on the books that may no longer be necessary and may be unduly burdensome.  Education policy was not even included in this original legislation, which has in any case been all but ignored by the CGA.  Another route to review has been taken by our neighbors to the south and provides an example of a proactive approach: the governors of New York and New Jersey have empaneled commissions to conduct regulatory review of education mandates.   The commissions’ work has just been completed, and the recommendations are being acted upon by the respective state legislatures.
Some inroads into this problem were made a few years ago when Governor Rell commissioned a panel to study mandate relief.  The commission’s work culminated in proposed legislation, “An Act Providing Relief to Municipalities,” during the 2009 legislative session, which would have required a two-thirds vote of the legislature to approve any new mandate and included a section dealing with binding arbitration reform.  Unfortunately, the bill did not pass the state legislature.  The Connecticut General Assembly should revive the commission’s recommendations and act favorably upon them.
In addition to periodic reviews of existing regulation, a similar process should be undertaken before any new mandates are proposed.  Legislators ought to ask whether the intended rule or regulation actually addresses the issue at hand.  For example, as part of the education reform package passed by the Connecticut General Assembly in 2010, the state mandated new high school graduation requirements that added two to three more courses for a high school student to graduate.  This mandate imposes costs on school districts in the form of hiring additional teachers, and it is of dubious value since it rests on the perception that more courses translate into better-prepared or more knowledgeable high school graduates.   The measure drew sharp criticism from local superintendents of schools, who questioned the intrinsic educational merit of more courses and the attendant costs.  There were many such regulations in the heralded 2010 and 2012 education reform packages.  While many of their goals may be laudable, they have imposed increased costs on school districts in order to be implemented.  Greater consultation and input from local educators and officials should be solicited to determine whether the goal of a mandate might be better accomplished in another fashion – such consultation would likely have found a different and perhaps more cost-effective path.
Unfunded education mandates have caused school district budgets to skyrocket in recent years.  Tackling these stealth taxes will be one way to foster greater economic competitiveness.  Connecticut should embrace mandate relief as one part of a multifaceted attempt to address its fiscal woes and become more economically competitive.
denardisl@sacredheart.edu

Monday, September 24, 2012

Trade Matters - John Rosen


Guest Blogger John Rosen is the Executive Director of Marketing Consulting Associates based in Westport, Connecticut.  You can reach him at jrosen@mcaworks.com.

 arod

The Doha Round is the ongoing series of multilateral talks sponsored by the World Trade Organization aimed at reducing or eliminating international trade barriers among nations.  It is named after the city of Doha, Qatar, where the talks commenced in 2001.  Ed. Note.


 Trade Matters - John Rosen

Recently, on September 14, we “celebrated” the fourth anniversary of the Lehman Brothers collapse, generally remembered as the critical event in the 2008 financial crisis:  the moment a “banking crisis,” along with a lackluster stock market and a slowing real economy, morphed into a full-fledged “Global Financial Crisis,” a frightening Wall Street crash, and the “Great Recession.”  It may well be true that the Lehman collapse was the key, catalytic event.  As mentioned, that is the way it is solidifying in the collective memory.

Economists and policy-makers continue to debate – and will eventually write many theses and books – on the subject of the causes of the crisis that gripped world financial markets in 2008, as well as the run-up and continuing aftermath.  I wish to propose that there was one contributing factor to the crash and aftermath that is significantly overlooked:  the collapse of the Doha Round of trade negotiations on July 29, 2008.  Not the primary, certainly not the only, but just as certainly one contributing factor to the crisis.

To repeat, my thesis is that the collapse of the Doha Round is an overlooked contributing factor to the crash.  Other causes are well known and documented – the vast, unsustainable bubble in U.S. housing prices preceding the crash, years of “too loose” monetary policy, the subprime debacle, a huge commodities price bubble and subsequent crash, moral hazard typified by the “Greenspan put,” reinforcement of the “put” by the Bear Stearns bailout in March 2008, a string of continuing financial institution failures and/or bailouts (Countrywide, Washington Mutual, Merrill Lynch, RBS, AIG, etc.), serious softening of the real economy, disastrous auto sales, and so on.  All these, and others, surely fit the definition of “contributing factor” to the financial crisis.  Determining the interrelationships of these factors, as well as their relative importance, is a subject best left to professional economists, Ph.D. candidates, finance journalists, and market participants to study for the next several years.  I simply want to be sure that one of the causes those experts include in their analyses is one which I think is seriously overlooked:  the collapse of the Doha Round of trade negotiations in July of 2008.

So, let’s address this “overlooked contributing factor” premise in two parts:  first, that the Doha Round collapse is overlooked as a key event in that dismal year and, second, the more interesting and presumably more controversial point that the collapse was a contributing factor to the overall financial crisis.

A couple of hours of online research on this topic reveals, as expected, that there are scores of readily available discussions, Wikipedia entries, more sophisticated and scholarly papers, and timelines regarding the financial crisis.  The timelines are particularly interesting.  Many very detailed and easy-to-access graphic versions exist.  One particularly good one (although it contains an interesting typo – see if you can find it) is from an organization known as DollarDaze.org and is attached.  You will not find any mention of the Doha Round on this otherwise extensive (and colorful) timeline.  Indeed, I could not find a single timeline that included mention of the trade negotiations and their July breakdown.  More detailed, text-only timelines are available online, and none – repeat, none – include the trade negotiation collapse.  One detailed text-only timeline does, indeed, have an entry for July 30, 2008:  “President signs into law the Housing and Economic Recovery Act (HERA) creating the Federal Housing Finance Agency (FHFA) to regulate Fannie and Freddie.”  Herein lies one of the key premises of this post:  while our leaders and their bureaucrats were running around creating more alphabet-soup agencies to appear to be “doing something,” they were, literally, ignoring something that really could have had a positive impact:  successful negotiation of the trade round.

The New York Times maintains an interactive crisis timeline (http://www.nytimes.com/interactive/2008/09/27/business/economy/20080927_WEEKS_TIMELINE.html), but it begins in September, with the entry, “The credit crisis, which long ago moved beyond its origins in subprime mortgages, began to accelerate in September.”  So, what was causing that acceleration?


There is, of course, a Congressional report, entitled “The Final Report of the Financial Crisis Inquiry Commission,” dated February 16, 2011.  It is 147 pages long.  It is detailed and thorough.  If you search for “Doha,” you find it mentioned not once. Not once.  (You will find 34 entries for “derivatives”).

Looking at the news reports at the time (on or around July 29, 2008), you will also find occasional mention of the failure of the negotiations, but precious little linkage to the larger, then-developing crisis in finance and the world economy.  The DJIA actually rose about 200 points on each of the ensuing two days; the FTSE and Hong Kong exchanges generally fit the same pattern.  Even the Wall Street Journal editorial page – which generally blames the Smoot-Hawley tariff of 1930 for turning a recession into the global Great Depression (blaming the tariff as well for the demise of civilization as we know it, the rise of Communism, Islamist Terrorism, video games, the Designated Hitter rule, global warming, and just about every other bad thing that has happened in the past eighty years or so) made only this tepid comment in its July 30, 2008 editorial on the topic:  “With economic growth slowing and inflation picking up, a successful Doha Round would have been a welcome growth tonic.”  That’s it.

Clearly, the first part of our premise is correct:  the failure of the Doha Round in late July 2008 is overlooked.  Or perhaps “overlooked” is the wrong description; perhaps it deserves to be ignored – perhaps it wasn’t a contributing factor at all, the subject to which we now turn.

Let’s start with a return to Congress.  There is a publication from the Congressional Research Service called “World Trade Organization Negotiations:  The Doha Development Agenda,” dated December 12, 2011 (that is, more than three years after the crisis of 2008).  Here is a representative quote from that report:  “In response to the global economic crisis, the G-20 leading economies have repeatedly called for conclusion of the Doha Round as a way to bolster economic confidence and recovery.”  That’s the point:  if you research hard enough, you can find lots and lots of similar quotes calling for successful regeneration of the talks under the general heading of “In response to the global economic crisis....”  That is, after-the-fact use of the crisis to justify or rationalize calls for reinvigorating the trade talks is widespread, but there are no contemporaneous or after-the-fact analyses blaming the failure of the 2008 talks for the larger financial crisis.

This may be simply opportunism on the part of people like WTO Director-General Pascal Lamy:  one can find quotes from him in advance of July 2008 claiming that a successful conclusion of the talks would be critical in “…ensuring that the financial shock would not deteriorate into a far worse economic recession worldwide.”  You can also find quotes from after the September-October crash claiming that a successful trade negotiation would help restore confidence and the economy.  What I have been unable to find is anyone who has gone on record claiming that the collapse of the trade talks contributed to the crash…only scattered advance warnings and ex post claims that fixing the talks would help restore things.

Amity Shlaes, in her excellent book, “The Forgotten Man,” expends many pages on the 1930 Smoot-Hawley tariff as a contributing factor to turning a recession into the Great Depression.  Quoting from that book:  “Smoot-Hawley provoked retaliatory protectionist actions by nations all over the globe, depriving the United States of markets and sending the country into a deeper slump.”  Ms. Shlaes remains a vigorous supporter of free trade, but I have been unable to find any public comment from her (there may be one, I just haven’t found it) drawing the direct linkage I am making in this post:  the Doha Round collapsed on July 29, 2008, a time of heightened concern about financial markets and the economy in general.  This was followed by wholesale collapses in world financial markets beginning with the seizure of Fannie and Freddie on September 7.  In the event, Ms. Shlaes co-authored a WSJ op-ed with Douglas Irwin on August 2, 2008 with the basic premise of “don’t despair / not to worry.”

Others – Ben Bernanke in his definitive study and Liaquat Ahamed in a more readable, general-audience book, for example – maintain that Smoot-Hawley was of only incidental importance to the Great Depression; that the global disaster was almost wholly caused by terrible monetary policy.  Bernanke has promised never to repeat that mistake.  Ergo, QE3.  As a general audience member who found Ahamed’s book much easier going than Bernanke’s, allow me to put a monetarist spin on my key point that the failure of the trade talks was important:  the collapse of trade talks implies less trade; less trade implies not only a lower level of economic activity but a lower velocity of money; which implies that supporting or resurrecting a given level of economic activity will require the Fed to print more money.  Ergo, again, QE3 – in this analysis, a direct consequence of the failure of the trade negotiations.
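For the arithmetic-minded, here is a back-of-the-envelope rendering of that monetarist chain using the quantity equation MV = PY; the figures are invented for illustration, not estimates of anything.

# Back-of-the-envelope arithmetic (illustrative numbers only) for the quantity
# equation M * V = P * Y: if velocity V falls and we want to hold nominal
# activity P * Y constant, the money stock M has to rise to compensate.
nominal_gdp = 14_000          # P * Y, in billions of dollars (made-up round figure)
velocity_before = 1.75        # hypothetical velocity of money before the trade shock
velocity_after = 1.50         # hypothetical, lower velocity once trade contracts

money_needed_before = nominal_gdp / velocity_before
money_needed_after = nominal_gdp / velocity_after
print(money_needed_before, money_needed_after)
# ~8000 vs ~9333: roughly 17% more money is needed just to stand still.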

Let me conclude with a few facts and a restatement of my premise, simply to avoid any miscommunication.  First, the buildup to the crash in September-October, 2008 was years in the making and had multiple, interrelated causes.  Second, the DJIA peaked at a little over 14,000 in October, 2007, a year before the crash.  It had settled into a range of 11,000 to 13,000 for several months by the time in question, July of 2008.  This indicates that markets, at least, were betting that the worst was over…but they were watching very closely.  Our premise is that the failure of the trade talks in July contributed to a general level of uncertainty and steadily increasing inventory of bad news leading to the September crash.  As noted, the failure of the Doha Round in July did not spark a huge stock sell-off, quite the contrary.  Stocks didn’t start really crashing until September.  What did crash quite concurrently with the Doha talks was the price of commodities.  Oil prices hit their $147 peak (WTI) in that very same July and began their rapid, scary crash down to under $40 by January.  Less trade most certainly does imply less demand for oil, and commodity speculators might certainly want to head for the hills…in response to a trade talks impasse, 30-40 days before everyone else (equity speculators) realizes that the game is up.

Finally, the Doha Round of negotiations has been resurrected, but the outlook is “unclear.”  All sorts of commentators, politicians, bureaucrats, economists, etc., continue to stress the importance of a successful conclusion to this round as a means of re-establishing business confidence and generally resurrecting the world economy.  (See “In response…” above). If it’s so important, now, to complete this round, using the trade agreement to re-establish sustainable global economic growth and prosperity, shouldn’t its failure in July, 2008, be considered at least one contributing factor to the September, 2008 financial crisis?


Monday, September 17, 2012

After the Hurricane


Perhaps your attention has drifted over to football, but the aftermath of Hurricane Isaac has given way to another national sport.  Isaac was the Category 1 storm that just pummeled New Orleans and neighboring areas, landing close to the spot where Hurricane Katrina – a Category 5 at its peak – came ashore back in 2005.  We now know that Isaac’s destruction and havoc – and especially the rainfall – were greater than expected.  Many were apparently unprepared for the storm’s intensity, as was evident after the fact, even though they had taken the recommended precautions.  Put simply, it seems many people were unpleasantly surprised by Isaac despite having received and acted on the advance warnings provided by the authorities.

Predictably, a chorus of complaints impugning the hurricane scale rankings has surfaced.  And with the outcry one hears calls for revising the hurricane warning scales – presumably in a direction that will convey better information.  Even the New York Times joined the fray, hosting a “debate” on the matter titled “How Could the Storm Ratings Be Improved?” I argue that replacing the current system with a more complex one is inappropriate – and, in fact, that advocating replacing the scale calls attention to the wrong issue.

The by now familiar hurricane scale is the Saffir-Simpson scale – a five-point category ordering originally created by the engineer Herbert Saffir and later enhanced and popularized by Robert H. Simpson of the National Hurricane Center.  It was an effort at prediction, in a manner intended to inform decisions both by individuals and by cities, states and regions.

Mr. Saffir’s ordering was a storm-destruction ranking system keyed primarily on wind speed.  To be considered a Category 1 hurricane, a system’s sustained winds must fall within the 74-95 mph range.  At the other extreme are the Category 5 monster cyclones, where wind speed exceeds 155 miles per hour.  Three other thresholds between 95 and 155 mph separate Category 2, 3 and 4 hurricanes.

Although it might seem somewhat obvious, the beauty of Mr. Saffir’s innovation was not only in establishing the thresholds (thereby creating the categories) but also in associating each category with the damage the winds may cause.  To do this formally he examined historical data and documented the positive relationship between wind speed and property and other physical damage.  Thus trees and unanchored mobile homes take the primary damage in a Category 1 storm, whereas – at the other end of the scale – a Category 5 involves the complete failure of roofs and some structures.  The other three descriptions of destruction were then matched with the sustained wind speeds that would produce the corresponding damage.  What follows, then, is to recognize that past is prologue.
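In code, the wind-speed-only scale just described amounts to a simple lookup.  This sketch uses the commonly cited Saffir-Simpson boundaries (the exact cutoffs have been adjusted over the years, so treat them as illustrative), with the Category 1 and Category 5 damage descriptions taken from the paragraph above.

# A minimal sketch of a wind-speed-only hurricane classification,
# using commonly cited Saffir-Simpson boundaries (illustrative only).
def saffir_simpson_category(sustained_wind_mph):
    # Map sustained wind speed (mph) to a category; 0 = below hurricane strength.
    if sustained_wind_mph < 74:
        return 0
    if sustained_wind_mph <= 95:
        return 1
    if sustained_wind_mph <= 110:
        return 2
    if sustained_wind_mph <= 130:
        return 3
    if sustained_wind_mph <= 155:
        return 4
    return 5

# Each category is then matched with the damage historically associated with those winds.
expected_damage = {
    1: "damage primarily to trees and unanchored mobile homes",
    5: "complete failure of roofs and some structures",
}
print(saffir_simpson_category(90), expected_damage[1])    # Category 1
print(saffir_simpson_category(160), expected_damage[5])   # Category 5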
What Mr. Saffir and Mr. Simpson created was a model: a simplification of a real event intended to convey decision-making information (material destruction) based on a particular, distinctive attribute – wind velocity.  By design, because it is an abstraction of reality, it will (practically) always be mistaken.  To expect a model to convey as much information as the actual event it aims to describe is itself a mistake.  Jorge Luis Borges most elegantly captured this fallacy in his On Exactitude in Science.  As an aside, it’s worth noting that this beautiful story stands to be one of the most influential (in my opinion), if not the most influential, in the history of the written word, given the number of words expended (it has fewer than 150).  Borges tells of a map-making competition in which successive generations of cartographers aimed to surpass the previous generation by building more and more accurate maps in pursuit of the perfect map.  They ended the competition by building a map that replicated the world – a map that was useless.

In threshold setting – which is at the core of the Saffir-Simpson method and other ordered-scale warning systems – there can be two types of errors.  Suppose the authorities announce a Category 1 storm (and, to keep it simple, assume there are only two levels).  When the cyclone finally arrives, we find the property damage ends up being what would normally be associated with a Category 2 hurricane – an unpleasant mistake.  On the other hand, suppose the authorities announce a Category 2 hurricane, and when the winds have quieted we find material damage of Category 1 magnitude.  The error, this time, is a more welcome one.   In principle, we could move the thresholds to try to maximize the “welcome” error.  Lower the Category 2 wind-speed threshold to 80 miles per hour: any storm with winds between 74 and 79 miles per hour is then a Category 1, and anything between 80 and 110 is a Category 2.  But in moving the thresholds we encounter unintended consequences. We would reduce the chances of incurring the unpleasant mistake and increase the chances of the welcome mistake, but over time, since nearly everything is going to be a Category 2 hurricane, any information contained in the warning is lost – it becomes meaningless, like Borges’ map. Think about it: why not avoid unpleasant mistakes altogether by eliminating the various categories and having only one?  In that case you can only be pleasantly surprised.
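A toy simulation (all numbers invented) of the two-category example above shows the trade: lowering the Category 2 cutoff from roughly 96 mph to 80 mph all but eliminates the unpleasant surprises and multiplies the welcome ones, but it does so by calling nearly every storm a Category 2, which is exactly how the warning drains of information.

# A toy simulation of the two-category example above (all numbers are made up).
import random

random.seed(7)
storms = [random.uniform(74, 110) for _ in range(10_000)]   # hypothetical wind speeds

def warned_category(wind, cutoff):
    # The announced category depends only on where we place the cutoff.
    return 1 if wind < cutoff else 2

def realized_damage_category(wind):
    # Pretend actual damage tracks the original 96 mph boundary, with noise.
    return 1 if wind + random.gauss(0, 8) < 96 else 2

for cutoff in (96, 80):
    unpleasant = welcome = 0
    for wind in storms:
        warned, realized = warned_category(wind, cutoff), realized_damage_category(wind)
        unpleasant += realized > warned   # damage worse than warned
        welcome += realized < warned      # damage milder than warned
    print(f"cutoff {cutoff} mph: unpleasant {unpleasant}, welcome {welcome}")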

So what is being left out of the map-building – the Saffir-Simpson scale – and is thereby a potential source of error?  There is more to a hurricane’s damage potential than high winds.  The storm surge – a wall of water at the leading edge of a storm – is especially destructive in low-lying areas.  The rainfall associated with storms presents flooding threats, especially in regions with saturated soils or already swollen rivers.  So at least two other measurable features of a cyclone – surge and rainfall – in addition to its wind speed are associated with the ultimate concern, material damage.  And the association between each of these features and material damage, aside from being positive, is different and distinct.  Thus, relying on one aspect of a hurricane – wind speed – to categorize storms in a manner that tells us how much damage to expect will imperfectly capture the association with the other aspects as well, practically guaranteeing a source of error.

Will a more complex model – perhaps one that combines wind speed, surge and rainfall – do better?  Not likely: error, and therefore judgment, cannot be avoided.

Thus a model builder has to trade off error and accuracy.  How much error?  Alan Alda’s character in Nothing but the Truth, arguing a point of law, set forth the parameters: “Is this mistake like wearing white after Labor Day, or is it like invading Russia in winter?”   Scale constructors often opt to minimize errors – and the associated costs of errors – and to educate the population, so that we understand the nature of the underlying decision-making model and thereby take its efficacy into account in our own decision-making and, importantly, in ex post performance appraisal.
Arod



Friday, September 14, 2012

Dr. Mongeon's talk at CoB Research Forum, Fall 2012


On Wednesday, September 12, 2012, from 12:30-1:30pm, in Maxcy 124
The College of Business is hosting a talk by
 Dr. Kevin Mongeon
Of the Department of Economics and the Department of Sports Management
Of the University of New Haven
Kevin’s professional and scholarly work is on the hockey version of sabermetrics.  Sabermetrics, popularized by Michael Lewis’ book Moneyball (and the recent movie), is all about the quantitative appraisal of baseball skills. Reliance on statistics to inform decision-making, instead of the ‘old-school’ subjective appraisals of talent, skills and other elements of the game, revolutionized baseball.  The same sabermetrics-grounded methods are now exerting their influence in other sports – hockey being an especially fertile area.
Here are three articles on Kevin and hockey metrics:
The paper Dr. Mongeon is presenting on Wednesday is titled:
“Economics and Existence of Rationally Biased Officiating”
Abstract
A stochastic decision model motivates an innovative econometric identification strategy that provides evidence that referees exhibit explicit forms of penalty calling “biases”.  However, this behavior is interpreted as rational in the context of both supplying game characteristics demanded by fans and favorable to leagues, and for self-preservation.  Penalty calling favors home teams while attempting to keep games close and balancing penalty calls between competing teams in order to promote a perception of fairness.  Short of constraining referees to act irrationally or leagues countering their own self-interest, bias mitigation through separating the officiating industry from league ownership is one policy option.
A light lunch will be served
For more information: dfraioli@newhaven.edu



Tuesday, August 28, 2012

CoB RESEARCH FORUMS SCHEDULE FALL 2012


The College of Business Research Forum
Fall 2012 Program


Wednesday, September 12, 2012/12:30-1:30pm/Maxcy 124
Dr. Kevin Mongeon
Department of Sports Management & Department of Economics

“Economics and Existence of Rationally Biased Officiating”


Wednesday, September 19, 2012/12:30-1:30pm/Maxcy 124
Dr. Esin Cakan
Department of Economics

“Impact of U.S. Macroeconomic News on Emerging Financial Markets”


Friday, September 28, 2012/12:30-1:30pm/Venue TBA
Dr. Catherine Lim
Department of Marketing

“Advertising and R&D Expenditures, Brand Equity, and a Firm’s Financial Value”


Wednesday October 3, 2012/12:30-1:30pm/Maxcy 124
Dr. Greg Blosick
Department of Finance

“Toward an Improved Finance Pedagogy II: The Finance Bowl – An Excel-Based Contest for Teaching Time Value of Money Concepts”





Wednesday, October 17, 2012/12:30-1:30pm/Maxcy 124
Dr. Richard Highfield
Professor of Finance

“Is there evidence that some investors are front-running analyst recommendations?”

The College of Business Issues & Solutions Speaker Series Presents
Wednesday, October 24, 2012/12:30-1:30pm/Dodds Auditorium
Dr. Bruce Bueno de Mesquita
New York University
TBA
The College of Business Issues & Solutions Speaker Series Presents
Monday, October 29, 2012/12:30-1:30pm/Dodds Auditorium
Ramesh Ramankutty
The Global Environment Facility
TBA
The College of Business Issues & Solutions Speaker Series Presents
Wednesday, November 14, 2012/12:30-1:30pm/Lee 301
John Lonski
Moody’s Analytics
TBA


Wednesday, December 5, 2012/12:30-1:30pm/Maxcy 124
Frank Chen
Department of Finance
           
“Systemic Risk, Financial Crisis and Credit Risk Insurance”


For more information: dfraioli@newhaven.edu