Friday, December 26, 2008

Dangers of Formalizing and Testing Economic Theory


Ekelund and Hébert (1990) argue, among other points, that over-formalizing economics through mathematical and statistical model building, without the concomitant guiding vision of a problem to solve, will be detrimental to the future of economic study. Are our present-day scholars of economics well equipped with tools, but bankrupt of original economic problems to research? It's worth examining each of these major points in detail.

Ekelund and Hébert (1990) draw several conclusions about the future of economics. First, econometrics is emphasized in graduate study, and its mastery is required for advancement in the profession. Second, econometrics is now so prevalent that any further expansion of its use should be weighed in terms of both its expediency and its costs. Third, proponents of advancing the scientific nature of economics have not fully considered whether that goal is achievable. Fourth, those who question continued formalization rightly argue that economic theory is not fully verifiable, because of the ethereal nature of human behavior and the extremely high cost of collecting relevant data. Fifth, economic ideas have historically led or paralleled the development of the techniques to test them, rarely vice versa, and reversing the order of this discovery process risks dividing modern economics into two disciplines: a branch of mathematics and a branch of the social sciences. Finally, it is not the blind analysis of data in pursuit of an elusive truth that makes economics relevant, but the healthy discourse among those of varied points of view.

Ekelund and Hébert (1990) point out that the study of mathematics and econometrics forms the basis of graduate economic curricula. University hiring policies and professional advancement within the discipline all require mastery and application of mathematical and empirical technique. Because this shift can be traced back before the second half of the 20th century, the authors' view is that it heralds the future direction of economics as being toward mathematics and away from the social sciences.

Is graduate training a rut from which no scholar can lift himself? Whether this assertion is true can be debated. Every great economist in history went far beyond his formal training, often breaking with tradition in arduous academic journeys. Clearly, those who are trained in mathematical analysis will have a natural tendency to rely on those techniques, but to suggest that they will not employ verbal or graphical descriptions of relevant economic phenomena is far-fetched. Smith, Ricardo, Mill, Marshall, Jevons, Walras, Veblen, and Keynes all pioneered new patterns of thinking and made original contributions to economic thought. Moreover, many of them did not even have the benefit of formal training in economics (often because the discipline did not yet formally exist). So if the history of economics is any predictor of the future, one cannot prove that econometric digressions in graduate training alone will forever doom the discipline.

Ekelund and Hébert (1990) assert that econometrics and mathematics have permeated modern microeconomic and macroeconomic theory, and that deeper use of econometrics in the discipline should be evaluated by weighing its costs against its advantages. From the vantage point of current scholarship, such an analysis may be difficult to carry out. Clearly, to the degree that it is possible to assess the future usefulness of present developments in econometric technique, such surveying should take place. However, the historian of economic thought has the unique perspective of being able to view past digressions in the discipline with 20-20 hindsight. My view is that the permeation of microeconomics and macroeconomics with mathematical techniques is not a cause for alarm as long as progress in theory and policy is being made.

Advocates of continued formalization of economic theory through mathematics argue that theories must be formalized and verified if economics as a discipline is to achieve the completeness of a science and be afforded a science's respect. Ekelund and Hébert (1990) make the salient point that while this may be a necessary or even an important goal, its pursuit seems to have taken precedence over the question of whether it is an achievable goal. The central theme in this portion of the debate is that economists do not get the respect they really deserve. Economic theories that have not been proven do not carry the weight of gospel and are difficult to translate into economic policy.

Should economics assume the rigidity of a formal science? This is an important question with far-reaching implications. Would it not be better to view it, as Ekelund and Hébert (1990, p. 604) suggest, as "a powerful, though somewhat imprecise behavioral science?" Even the truths we hold dear in the physical sciences are neither absolute nor free of the founding assumptions on which those disciplines rest. With this in mind, economic policy must be formulated whether or not economics is regarded as a precise and formal science. Imagine the result if the political economists had simply had no recommendations for improving the post-mercantilist economies because of the difficulty of collecting data or the incomplete testing of theories. Absurd!

Opponents of continued mathematization and formalization argue that economics is essentially a social science, and as such is subordinate to the whims of human behavior and subject to the inexactness and vagaries of small data samples and incomplete theories describing economic phenomena. Furthermore, the authors of the text indicate that critics of econometrics argue that data in the quantities and qualities necessary to validate the economic theories in question can be obtained only at prohibitive cost.

While it may be true that economics is essentially a social science, all great strides of progress in the study of economics have had both theoretical and methodological components. In other words, the methodology to test theory was developed in close association or in parallel with the theory itself. For example, it was economists such as Edgeworth who developed descriptive statistical techniques powerful enough to serve as the foundation for modern statistics and econometrics. Cournot was one of the first to recognize the importance of mathematical tools in accurately encapsulating economic ideas, avoiding the digressions into vague argumentation that could occur when economic analysis was explained only in literary form. One possible danger of widespread econometrics is the sheer size of the data collection required for analysis: economic problems that do not lend themselves to this sort of analysis may be overlooked or ignored. On the other hand, Alfred Marshall resisted mathematical expression of economic ideas because he sought to preserve the accessibility of economics to the common man.

Ideas that fuel economic analysis run parallel with the development of theory. Econometric and mathematical formalization is beneficial for testing ideas as long as the limits to the final productiveness and efficacy of such techniques are understood. Econometrics has the potential to bog down economic studies in the tactical analysis of obscure ambiguities in the data instead of moving forward with the business of strategic problem solving. Some recently developed mathematical techniques have not proven powerful or far-reaching, and so are highly subject to revision and replacement. Ekelund and Hébert (1990) hold the view that as long as the focus of economic study is on mathematics and empiricism, the debate has the potential to divide the discipline of economics into two separate camps: (1) economics as a branch of behavioral science, and (2) econometrics as a branch of applied mathematics.

The danger of creating a new discipline that focuses on the development of econometric techniques may be real, but that result would hardly be detrimental or without precedent. Varied schools of thought have come and gone throughout the history of economic study. It is the very diversity of ideas that forms the basis of a healthy and interesting dialogue among economists. Without a variety of ideas, and great minds behind those ideas, the study of economics would be far less interesting and probably less relevant. Given the rich history of debate in economics, what is the harm in allowing the current fascination with econometric analysis to run its course?

Finally, over-formalization of theory in economics through mathematical or empirical technique risks rendering economics boring or irrelevant. Mathematical formalization should not be abandoned, but neither should it be the driving factor in economic thought; the use of mathematics should be accepted critically, with a judicious measure of skepticism. The runaway development of econometrics may already be decelerating, as some signs point to a decline in the production of econometrics articles in economics journals. It is important that those prominent in economics remember that absolute answers in economics are elusive and that mathematical techniques cannot find truth; they can only raise confidence, lower confidence, or reduce ambiguity in postulated ideas. Indeed, the quest for scientific legitimacy has put the future of economics as a field of heterodox discourse in question.

Of course, one should not think that the use of mathematics is ruining the discipline of economics. Hardly. The many fine minds conducting research today are a remarkable resource, with the power to tremendously improve our ability to intervene in dysfunctional aspects of the economy while leaving alone those elements that are working fine. From a historical perspective, the message of the present fascination with econometrics is that it will shift and evolve into new solutions in ways that no one, not even the most astute historian of economics, can anticipate.

What is the end purpose of economic inquiry? Should it not be formulating practical answers to the questions of economic policy? This is not a new debate; the issue of mathematical formalization of economics has been debated continually since Adam Smith wrote The Wealth of Nations. It is in some sense a rather unusual turn of events, though. Those who wanted to formalize economic theory have made great strides, but it was thinkers such as Smith, Walras, Marshall, and Keynes, who could think outside the box and break new ground, who pushed the discipline forward. Is it not ironic that in a day and age when computers make ever greater formalization of economics possible, some economists would cry, "Too much"?

In their conclusion to the chapter on the development of mathematical and empirical economics, Ekelund and Hébert (1990) raise several questions that are not easily answered. Are graduate students of economics being trained improperly? Is the field of economics squandering its human capital, enamored with technique development but reticent to apply those same techniques in a cost-effective way? Do economists have low self-esteem about being treated as less than scientists? Can economics achieve the empirical validity of a true science? Can the development of econometric techniques move too far ahead of economic inquiry? Are professors and researchers of economics killing it as a discipline?

My view is that, regardless of the weaknesses in some of their rhetoric, Ekelund and Hébert (1990) are essentially correct that too much formalization and verification is harmful in a field of study that must address real-world, real-time problems. The logical end point of the study of economics, by its very nature, is to understand economic behavior and make practical suggestions, albeit imprecise ones, within a meaningful time frame. Otherwise, modern economics has little practical value and is in danger of becoming a self-perpetuating, incestuous field of study.

Reference

Ekelund, R. B., Jr., & Hébert, R. F. (1990). A history of economic theory and method (3rd ed.). New York: McGraw-Hill.

Friday, November 7, 2008

Mastering Basic Statistics

Trust me. You can do statistical analysis. The basics of statistics can be mastered... Forget about those mind-numbing textbooks for a second. Descriptive Statistics are all about what the data looks like. Inferential Statistics are all about whether two sets of data are different or if two sets of data have a relationship.

Descriptive Statistics are a summary of what the sample data looks like, such as a measure of central tendency (e.g., the mean for interval data) and a measure of dispersion (e.g., the standard deviation (SD) for interval data). Data dispersed about the mean in a bell shape is normally distributed (i.e., 68.26% within 1 SD, 95.44% within 2 SD, 99.73% within 3 SD). A randomly drawn sample is best but rarely possible, so a non-random or convenience sample can be used with justification.
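To make this concrete, here is a minimal Python sketch (the sample values and variable names are invented for illustration) that computes the two staple descriptive statistics and checks the first slice of the bell-curve rule:

```python
import statistics

# Invented interval data: minutes spent per customer call
sample = [14, 18, 15, 20, 17, 16, 19, 15, 18, 16, 17, 21]

mean = statistics.mean(sample)   # measure of central tendency
sd = statistics.stdev(sample)    # measure of dispersion (sample SD)

# Empirical rule check: roughly 68.26% of normally distributed data
# should fall within 1 SD of the mean
within_1sd = sum(mean - sd <= x <= mean + sd for x in sample) / len(sample)

print(f"mean = {mean:.2f}, SD = {sd:.2f}, within 1 SD = {within_1sd:.0%}")
```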

When compiling descriptive statistics, you need to know the level of measurement of the sample data: nominal (yes, no, or a label), ordinal (in some kind of order, such as doneness of meat: rare, medium rare, medium, or well done), or interval/ratio (a number that has order and whose value means something, such as "that movie is an 8 on a scale of 10"). You also need to know the unit of analysis, such as the individual, group, organization, or society. Descriptive Statistics tell us which Inferential Statistics we can safely use to draw conclusions.

Inferential Statistics are how we make a decision about the POPULATION, guided by what the Descriptive Statistics have told us about the SAMPLE data, using probability theory. There are two types of decisions: Measures of Difference and Measures of Association. Measures of Difference (z, t, F, etc.) test differences between a number and a sample, between two samples, or among more than two samples. Measures of Association (r, correlation, regression) test whether variables move together and possibly whether there is some causal relationship. (Causal relationships are tricky to prove, so be careful about saying X causes Y.)
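Here is a short SciPy sketch with one measure of difference and one measure of association; all of the numbers and variable names are invented for illustration:

```python
from scipy import stats

# Measure of difference: do two samples have different means? (t test)
store_a = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3, 7.2]   # satisfaction, store A
store_b = [6.2, 6.5, 6.1, 6.6, 6.4, 6.3, 6.0]   # satisfaction, store B
t_stat, p_diff = stats.ttest_ind(store_a, store_b)

# Measure of association: do two variables move together? (Pearson's r)
ad_spend = [10, 12, 15, 11, 14, 13, 16]          # thousands of dollars
revenue = [100, 115, 140, 108, 133, 125, 150]    # thousands of dollars
r, p_assoc = stats.pearsonr(ad_spend, revenue)

print(f"difference: t = {t_stat:.2f} (p = {p_diff:.4f})")
print(f"association: r = {r:.2f} (p = {p_assoc:.4f})")
```

Note that even a very high r here would not prove that advertising causes revenue, per the caveat above.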

When applying Inferential Statistics, the measures of difference or association you can use are governed by the level of measurement, the number of samples you are comparing, whether the samples are random and independent, and whether the data is dispersed about the mean like the normal distribution. When comparing samples, you must make sure the unit of analysis in each sample aligns with the other samples and with your research question (e.g., students in a classroom vs. a classroom of students: should a single student be judged by being in a particular class, or should the class be judged by a single student?). Test statistics are calculated from sample data, critical values are looked up on a distribution (probability) table, and hypothesis testing compares the two. A low p value (below your chosen significance level, often .05) means you can reject the null hypothesis.
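The test-statistic-versus-critical-value comparison and the p value are two views of the same decision, as this sketch shows (the degrees of freedom and test statistic are made-up inputs; SciPy's t distribution stands in for a printed table):

```python
from scipy import stats

alpha = 0.05   # significance level chosen before testing
df = 12        # degrees of freedom (hypothetical sample)
t_stat = 2.40  # test statistic calculated from that sample

# Critical value "looked up" on the t distribution (two-tailed test)
t_crit = stats.t.ppf(1 - alpha / 2, df)

# Equivalent p-value view: chance of a t at least this extreme under the null
p_value = 2 * stats.t.sf(abs(t_stat), df)

if abs(t_stat) > t_crit:            # same decision as p_value < alpha
    print(f"|t| = {t_stat} > {t_crit:.2f}: reject the null (p = {p_value:.3f})")
else:
    print(f"|t| = {t_stat} <= {t_crit:.2f}: fail to reject (p = {p_value:.3f})")
```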

All good quantitative research uses variations of the above to boil the research question down to a testable hypothesis over a large sample for descriptive, exploratory, or causal/experimental research. All good research articles explain how construct validity (i.e., the theory or practical problem), external validity (i.e., how and why the sample was chosen), internal validity (i.e., why we believe what we saw is what we actually saw), and conclusion validity (i.e., how the descriptive and inferential statistics support the discussion) are achieved. Qualitative research with a sample size of one might use ethnography, action research, or other methods to build a case study or a foundation for quantitative research.

That's it. That's about all a business manager or MBA must know about statistics. Of course, there is a lot more that you could know, but the basics can be mastered.

Monday, November 3, 2008

Broadcast News Media Research Indicated Bias for Senator Barack Obama

A broadcast news media study by the Center for Media and Public Affairs at George Mason University found that coverage of Senator Barack Obama during the 2008 U.S. Presidential campaign was 65% positive, while coverage of Senator John McCain was only 36% positive.

According to the study by researchers at George Mason University, there was a documented media bias for Obama and against McCain. Did the bias influence voters? I don't know. Was the study relevant news that was largely ignored? I don't know.

The link below has the details. Judge for yourselves...

Source: http://www.cmpa.com/media_room_press_10_30_08.htm

Additional References: Pew Charitable Trust Study of Print Media: "The media coverage of the race for president has not so much cast Barack Obama in a favorable light as it has portrayed John McCain in a substantially negative one, according to a new study of the media since the two national political conventions ended."

Source: http://journalism.org/node/13307

References

The Center for Media and Public Affairs at George Mason University, http://www.cmpa.com/media_room_press_10_30_08.htm

Pew Charitable Trusts Excellence in Journalism, http://journalism.org/node/13307

Thursday, October 23, 2008

Unwelcome Effects of Public Opinion Research


Public opinion surveys and polls have gained prominence in presidential races because of the economy and efficiency of mass opinion polling by telephone and over the Internet. For example, with a relatively small sample of just under 400 randomly selected participants, one can gain a reasonable understanding, within a margin of error, of the opinions of up to 1,000,000 persons. Such is the miracle of statistical inference.

A sample of approximately 1,500 randomly drawn individuals may be projectable across the entire nation. The implications are clear. An unscrupulous candidate, who strongly desires to be elected, may communicate only those messages that increase his/her favorable ratings in the polls. On the other hand, a candidate with integrity may use the pollster to determine those messages springing from his/her political ideology that need fine tuning to appeal to the largest group of voters.
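The arithmetic behind these sample sizes is straightforward. A minimal sketch (the function name is mine; the worst-case assumption p = 0.5 and the 95% confidence z value of 1.96 are standard) reproduces the two figures above; for large populations, the margin of error depends almost entirely on the sample size, not the population size:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (384, 1500):   # "just under 400" and the national sample
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n =  384: +/- 5.0%
# n = 1500: +/- 2.5%
```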

Appealing to the largest group of voters is similar in concept to the responsiveness that all politicians in a majoritarian form of democracy must exhibit. Politicians and news organizations should use public opinion polling only to gain a better understanding of their audiences; polls alone should not be considered news and should not be reported in a way that will shape public opinion. Is that too much to ask? Is that unrealistic? Perhaps.

In sum, honest and disingenuous politicians alike, and news organizations with a specific agenda, may find the pollster an indispensable member of the team, but there is a societal cost.

Reference

Janda, K., Berry, J. M., & Goldman, J. (1995). The challenge of democracy: Government in America (4th ed.). Boston, MA: Houghton Mifflin.

Thursday, September 11, 2008

The Global Climate Change Presidency


The next U.S. President could have significant impact on global climate change, yet Senator Obama’s policy is more about energy. Senator McCain’s policy addresses the larger scope of global climate change.

My personal research is relevant to the presidential candidates' energy and climate change policies because the two candidates seem to have vastly different understandings of climate change and its potential underlying causes. (The research is on beliefs about climate change, and I am always looking for additional participants: http://www.geocities.com/dawagnersjca/short.html .) Advertising notwithstanding, I've been thinking a great deal about global warming.

Some voters are not convinced that global warming is occurring. Others believe strongly that it is. Those who believe strongly in global warming are not in agreement about the cause: some attribute the warming to human activity (i.e., anthropogenic causes), while others are split among a variety of natural causes, such as variations in solar radiation due to sunspots.

The Obama campaign has no clearly stated policy on global warming; there is no discernible action plan for what the Obama/Biden ticket will do to tackle this extremely important issue. Instead, Obama's energy plan makes vague reference to reducing greenhouse gases.

On the other hand, the McCain campaign has published a clear statement on climate change policy, albeit brief. Like the Obama plan, the McCain plan presupposes that greenhouse gas emissions are causing global warming. Unlike Obama, McCain provides extensive detail about how a market-based cap and trade policy will encourage an overall lowering of greenhouse gases.

Double fault: Obama. Obama's energy policy provides easy-to-understand bullet points that lack necessary detail, implying that the candidate and his advisors have not really done considerable thinking about how to address global climate change. Moreover, the lack of detail, combined with the prominence of greenhouse gas emissions (as the only cited cause of global warming) in the energy policy, seems to indicate that no further scientific inquiry will drive the Obama plan.

Advantage: McCain. McCain's climate policy is much more detailed and seeks scientific answers for setting acceptable levels of greenhouse gases. Presumably, a science-based approach would include developing an extensive understanding of the degree to which greenhouse gases have played, and will continue to play, a role in global climate change. Both candidates' websites offer press releases praising their respective climate policies, but only Senator McCain has articulated a point of departure for building a comprehensive solution to the problem of global climate change.

References

http://my.barackobama.com/page/content/newenergy

http://www.johnmccain.com/Informing/News/PressReleases/1F8B2869-689E-4E79-BFB4-C20CF1A47297.htm

Monday, September 8, 2008

Rumsey's Ten Common Statistical Mistakes

  • Misleading Graphs
  • Biased Data
  • No Margin of Error [reported]
  • Non-random Samples
  • Missing Sample Sizes (i.e., not reported)
  • Misinterpreted Correlations
  • Confounding Variables (i.e., outside influences not discussed)
  • Botched Numbers
  • Selectively Reporting Results
  • The Almighty Anecdote
Reference

Rumsey, D. (2003). Statistics for dummies. New York: Wiley.

Monday, September 1, 2008

Rumsey's Ten Criteria for a Good Survey


  • Target Population Well Defined
  • Sample Matches the Target Population
  • Sample is Randomly Selected
  • Sample Size is Large Enough
  • Good Follow-Up Minimizes Non-Response
  • Type of Survey Used is Appropriate
  • Questions are Well Worded
  • Survey is Properly Timed
  • Survey Personnel are Well Trained
  • Survey Answers the Original Question

Reference

Rumsey, D. (2003). Statistics for dummies. New York: Wiley.

Monday, August 11, 2008

Research on Beliefs about Global Climate Change / Al Gore

I would appreciate your help with an academic survey of beliefs about Al Gore and Global Climate Change.

It does not matter if you are a believer in global warming or a skeptic of global warming.

Here's the survey link:

http://www.geocities.com/dawagnersjca/short.html

Be sure to read the notice on the first page.

Thanks for your help. Please let me know if you have any questions.

Tuesday, July 8, 2008

Spoiler Candidacies Revisited

H. Ross Perot, a successful businessman, mounted an interesting but unsuccessful independent bid for the Presidency of the United States in 1992, garnering 19 percent of the popular vote but failing to carry the electoral votes of any state; he ran again in 1996 under the banner of the Reform Party, which he had founded in the interim. The influence of the Reform Party is atypical among American third parties, though, with most having far less popular support.

Third parties or minority parties have historically been one of four types (Janda, Berry, & Goldman, 1995): (1) Bolter parties splitting from the Democratic or Republican parties; (2) Farmer-labor parties representing working class individuals who feel they are not getting their fair share; (3) Ideological protest parties criticizing the established system; (4) Single issue parties seeking to promote an issue rather than new government philosophy.

Most American voters are loyal to either the Republican or the Democratic party, making it virtually impossible for a third-party or minority-party candidate to win. Notwithstanding, third-party movements such as Perot's Reform Party provide a means of expression for potential voters who are disenchanted with both major parties. Moreover, expressing alternative platforms and agendas for government policymaking is what third parties do best.

The Reform Party, which was essentially an ideological protest party, affected the platforms of the Democratic and Republican parties by drawing attention to important, overlooked issues such as the need for a balanced budget and an end to government waste. In the future, third-party platforms such as the Reform Party's will continue to have populist influence on the two major parties. Other important third-party challenges can be found in the Constitution, Libertarian, and Green parties.

Reference

Janda, K., Berry, J. M., & Goldman, J. (1995). The challenge of democracy: Government in America (4th ed.). Boston, MA: Houghton Mifflin.

Thursday, June 12, 2008

Improving Government Responsiveness to Public Opinion

In the majoritarian model of democracy, the government is made responsive to public opinion through political parties according to the model of responsible party government by addressing the following issues (Janda, Berry, & Goldman, 1995): (1) Parties should present clear and coherent programs to voters; (2) Voters should choose candidates on the basis of party programs; (3) The winning party should carry out its program in office; (4) Voters should hold the governing party responsible at the next election for executing its program.

Examining how well an American political party meets the above tests of responsible party government yields a mixed review. When candidates are running for office, programs are presented to voters in a clear fashion. However, voters often choose the candidate based on some personal characteristics instead of solely upon the programs supported by that candidate’s party. When elected to power, American political parties do tend to shape government policy according to their party platforms.

What party should be held accountable as being in power if the Democratic Party controls the House of Representatives, the Republican Party controls the Senate, and a Republican President sits in the Oval Office? Some evidence suggests that many voters purposely split the ticket among the presidential, congressional, and senatorial candidates, so is majoritarianism still served in such a case? No; the American model of democracy is more pluralist than majoritarian, because it does not completely meet the tests of responsible party government.

Several reforms could bring America's two political parties closer to responsible party government. First, at present, campaigns are highly personalized to the candidates being elected and are conducted outside the control of party organizations. To improve the majoritarian nature of American democracy, the connections between candidates and voters need to be strengthened during campaigns and elections. Second, party identification has weakened over the years; strengthening the value and importance of party membership between elections could improve the link between voters, parties, and candidates. Third, the tie between candidate and party is loosely defined at the national level and almost non-existent in Senate and House elections; candidates for national office could relate their positions to national party platforms more clearly. Finally, national party leadership could take a more active role in helping local candidates affiliated with the party get elected.

These are but a few of the ways that the responsible party model of government and the majoritarian nature of our democracy could be improved.

Reference

Janda, K., Berry, J. M., & Goldman, J. (1995). The challenge of democracy: Government in America (4th ed.). Boston, MA: Houghton Mifflin.

Monday, May 26, 2008

Understanding Null and Alternative Hypotheses

Is Grandma's Freezer Cold? (Understanding Null and Alternative Hypotheses)

When approaching business research, managers are sometimes confused by the concepts of the null and alternative hypotheses. The concepts are incredibly useful though, when the decision can be framed as a binary choice.

The null hypothesis embodies the condition that nothing has changed. For example, if we wished to learn if deep freezers were cold inside, we could think of the research in terms of null and alternative hypotheses.

The null hypothesis would be that the freezer in our sample is cold inside, which would be the normal condition. The alternative hypothesis would be that the freezer in our sample is not cold.

Therefore, to draw our sample, we walk up to Grandma's deep freezer, open the door, and stick our hand inside. Yes, Grandma's plugged in freezer is cold inside.
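If we could take repeated readings rather than a single touch, the same null/alternative framing becomes a formal one-sample test. Here is a hedged sketch with invented readings (it uses SciPy's one-sample t test; the alternative keyword needs SciPy 1.6 or later):

```python
from scipy import stats

# Invented repeated temperature readings (deg F) from Grandma's freezer
readings = [8.5, 10.2, 9.1, 11.0, 9.8, 10.5, 9.3, 10.1]

# Null:        the freezer is cold (mean temperature <= 32 deg F)
# Alternative: the freezer is not cold (mean temperature > 32 deg F)
t_stat, p_value = stats.ttest_1samp(readings, popmean=32,
                                    alternative='greater')

if p_value < 0.05:
    print("Reject the null: the freezer is not cold.")
else:
    print("Fail to reject the null: no evidence the freezer is warm.")
```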

Internal validity, which means that what we saw is what we thought we saw, is supported because we sensed that the freezer was cold with our own hands.

External validity is good in this case, which means that we can project our sample onto the population of freezers that are plugged in (i.e., we did not check air conditioners, tap water, or ovens, but a freezer).

Construct validity, which is the theoretical background of measuring the temperature of freezers by sticking your hand in them, is supported because we have stuck our hand in all sorts of places to ascertain temperature before.

Conclusion validity, or support derived from statistically drawing a conclusion about all freezers from our sample of one, is not very good because our sample is very small.

Tuesday, April 15, 2008

American Public Opinion and Government Stability

Public opinion both shapes and is shaped by American government policy. Five characteristics of public opinion help explain how this symbiosis contributes to stability (Janda, Berry, & Goldman, 1995): (1) Public opinion about policy can change over time; (2) Public opinion defines the contours of acceptable public policy; (3) Public opinion embodies inaccurate views because citizens are willing to provide opinions to pollsters on unfamiliar subjects; (4) Government tends to respond to public opinion; (5) Government policy does not always immediately reflect public opinion. In sum, while government does not always do exactly what the population says it wants, it does listen to public opinion and adjusts policymaking efforts accordingly over time, when practicable.

Reference

Janda, K., Berry, J. M., & Goldman, J. (1995). The challenge of democracy: Government in America (4th ed.). Boston, MA: Houghton Mifflin.

Monday, March 3, 2008

Professional Marketing Practice: the Professional Certified Marketer (PCM) Designation

Over six years ago, the American Marketing Association (AMA) established a program for certifying marketers: the Professional Certified Marketer (PCM) designation. I earned this designation in 2002 but have since gone inactive, due to limits on how much my company and the University will reimburse for professional association dues. Recently, I was contacted by the leaders of a PCM group that is forming on LinkedIn.com.

The rationale behind the PCM is that individuals who have dedicated their careers to marketing and have mastered an appropriate body of knowledge deserve recognition. Moreover, the public at large should benefit from a higher level of professionalism by those who dispense marketing advice. Certification supports the notion that there is a body of knowledge that should be mastered by those who practice marketing as a career. Certification may seem pretentious to some, but it is in line with other professions that have sought to raise the threshold for those who would practice in the profession.

For MBAs who are active in marketing roles, the designation is particularly interesting because, just as AMA members must agree to abide by a code of ethics, so must PCM holders (whether they are AMA members or not). A copy of the AMA code of ethics can be found here: http://www.marketingpower.com/content435.php. An AMA member who violates this code of ethics can be expelled from the Association, and any PCM holder can have the certification revoked. It is important to note that the marketing profession is attempting to police unethical practices within its ranks and raise the general level of professionalism of marketers.

With regard to the PCM exam, a standard principles of marketing text will be helpful for preparation. Anyone who has an appropriate Bachelor's degree and four years of documented experience, or an appropriate Master's degree and two years of documented experience, may sit for the PCM exam. The exam is a five-hour, 240-question test covering the following subject matter, which happens to parallel much of the material that we will be discussing in a standard principles of marketing course:

1. Legal, Ethical and Professional Issues in Marketing
1.1 Comply with appropriate regulations, laws and guidelines affecting marketing
1.2 Adhere to applicable ethical codes
1.3 Engage in ongoing professional development to advance competence and practice
2. Relationship, Information and Resource Management
2.1 Set priorities, allocate organizational resources and establish information channels linking departments, disciplines, and/or branch offices regarding marketplace, consumers, and competitors
2.2 Establish and manage internal and external relationships with appropriate/relevant stakeholders to support/facilitate marketing efforts
3. Assessment and Planning of the Strategic Marketing Process
3.1 Conduct environmental analyses by identifying industry trends, analyzing competitors, assessing own organization and researching the customer in order to evaluate a marketing situation and guide strategy development/selection.
3.2 Conduct market research to collect data related to environmental scans, demand forecasts, market segmentation, new product testing, etc. to guide/support marketing strategy development/selection
3.3 Develop a market-product focus by setting marketing objectives (based on marketing and product), segmenting the market, identifying target segment(s), and positioning the product, good, or service
4. Use of the Marketing Mix
4.1 Develop strategies to introduce a new product to a market based on product characteristics, market information and corporate objectives
4.2 Identify appropriate direct marketing promotional strategies (personal selling, advertising, sales promotion, publicity, etc.) to achieve marketing goals
4.3 Develop appropriate retail/wholesale "place" strategies (channel of distribution, store location, etc.) to achieve marketing objectives.
4.4 Develop appropriate pricing strategies (actual price, sale price, MSRP, etc.) by analyzing demand, cost and profit relationships to realize pricing/profitability goals and marketing objectives.
5. Marketing Evaluation
5.1 Monitor and evaluate effectiveness of marketing process(es), programs and outcomes

I recommend the PCM designation for those who are active in marketing, because it is gaining ground as a credential that filters those who have prepared for general marketing management roles from those who have not. Moreover, the designation is not difficult to attain for those who practice marketing and have done well in a rigorous principles of marketing course. For more information on the Professional Certified Marketer (PCM) exam, visit the link below in the references.

Reference

http://www.marketingpower.com/content591.php

Sunday, February 10, 2008

Factors of Political Opinion Formation

Ideological orientation is an important factor in forming political opinions, but how do ordinary citizens, who have not developed a consistent set of political attitudes and beliefs, form opinions? Beyond ideological orientation, other factors help shape public opinion: an individual’s own self-interest, a comprehended set of political information, a series of opinion schemas, and the influence of political leadership (Janda, Berry, & Goldman, 1995).

When individuals might benefit or suffer from a particular government policy, they generally respond in terms of their own best interest, unless they feel that acting in their own self-interest is immoral. Many citizens have no clear opinions on issues that do not affect them personally.

When individuals lack understanding on a political issue, they tend to respond with an opinion based on the latest information received, which can cause polls to fluctuate. Political information obtained through the mass media and filtered through an individual’s political socialization can produce a wide variety of opinions. However, lack of political information does not inhibit most individuals from expressing an opinion.

Various facts, images, and perceptions can be mapped into what is called an opinion schema, which can serve as a proxy for a formal political ideology and guide the formation of an opinion on a specific issue (though the opinion is still influenced by any overarching political ideology). These schemas are a means of understanding the images, connections, and values that people attribute to a subject.

Finally, in the absence of specific information, citizens can be swayed for or against a government policy by highly influential political leaders. Public opinion is more often shaped by the personalities in government, via the mass media, than it is a force that actually shapes the government.

Reference

Janda, K., Berry, J. M., & Goldman, J. (1995). The challenge of democracy: Government in America (4th ed.). Boston, MA: Houghton Mifflin.

Friday, January 18, 2008

Did Babies Build Roads in Europe? Presumed Causation Between Correlated Variables

A frequent problem with interpretation of data in the social sciences and business research is presumed causation between correlated variables. Two variables can exhibit perfect linear correlation yet not be in a cause and effect relationship. Generally, we need to satisfy at least three stipulations to argue for a cause and effect relationship:

  1. Temporal precedence -- the cause happens before the effect.
  2. Association between the independent variable (i.e., cause) and dependent variable (i.e., effect) -- a linear, geometric, exponential, logarithmic, or some other covariation exists.
  3. No reasonable alternatives -- upon careful inspection there are no other reasonable explanations for why the cause would result in the effect.

For example, between the years of 1945 and 1962, there were dramatic increases in both the number of new roads built in Europe and the number of live births in the United States. (Note that I read this comparison somewhere but do not recall where; I use it frequently when teaching undergraduate statistics because the absurdity of the comparison, on its face, makes the lesson easy for students to remember.) Were babies building roads in Europe? Not likely. Were roads in Europe making it possible for more babies to be born in the good 'ole U.S.A.? Not likely. Variables can be highly correlated yet probably unrelated. That is, there is no direct relationship between those variables; a confounding, third variable could be related to both of the correlated variables, which in this case we might assign to the drastic social upheaval that occurred during World War II.
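A quick sketch makes the point. The two series below are invented stand-ins for the road-building and birth figures, constructed only so that both trend upward over the same years:

```python
from scipy import stats

# Invented illustrative series: both simply trend upward, year by year
roads_built_europe = [120, 150, 185, 210, 260, 300, 340, 390]  # new roads
live_births_usa = [2.9, 3.1, 3.3, 3.4, 3.5, 3.6, 3.8, 3.9]     # millions

r, p = stats.pearsonr(roads_built_europe, live_births_usa)
print(f"r = {r:.2f} (p = {p:.4f})")  # near-perfect correlation, no causation
```

The near-perfect r here reflects a shared time trend, not any causal link between the two variables.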

In business research, causation and correlation are frequently confused as well. For example, are the dollars invested in showroom inventory the cause of sales revenue at the retail furniture store? Or are the dollars of sales revenue generating investments in new showroom inventory? Or is a third, confounding variable, such as consumer demand, somehow affecting both? In many cases, the true causal variable is not being measured at all; at a minimum, the three stipulations above must be satisfied to argue for causation between any two known variables.

Tuesday, January 8, 2008

Enterprise Information Systems: The Problem of Integration

The time and attention of humans is required to integrate the information created, analyzed, and stored by departmental functions. Many impediments to accounting information system integration within the enterprise are easily identifiable, the chief of these being the existence of disconnected information systems that are native to individual functional areas of the organization. These native, function-based information systems are not integrated in any automated sense; instead, cross-functional information systems are integrated by the ultimate software system: people.

Of course, the entire organization must grow to survive, and business process growth inevitably requires the storage and retrieval of additional information in departmental database servers, the nexus of that growth. Such inter-departmental integration challenges are common, as managers require performance reporting that reflects a highly fluid business environment. Even the information systems within a departmental function can grow and morph, introducing intra-departmental integration challenges.

Integration can be partially achieved by integrating similar types of systems and, finally, the reporting output from those systems (Dunn, Cherrington, & Hollander, 2005). Information system planning can reduce the number and scope of information pockets stored in the various functional silos of the enterprise, whether by building systems from scratch or by obtaining enterprise-wide accounting systems. The key concept in information system integration is to re-engineer business processes along with their concomitant accounting information systems from the ground up, avoiding partial patching of information systems to achieve the necessary integration. However, the low-hanging fruit in accounting system re-engineering may be simply capturing and recording the same information with shorter elapsed time and fewer inaccuracies, not necessarily re-engineering the entire business process. The trade-offs seem a matter of project scope.

Reference

Dunn, C., Cherrington, J. O., & Hollander, A. S. (2005). Enterprise information systems: A pattern-based approach (3rd ed.). New York: McGraw-Hill/Irwin.