Sunday, December 31, 2006

Four Attributes of Leadership Vision

With regard to Imagining the Ideal and the concept of vision being “an ideal and unique image of the future,” there are four attributes of vision that represent dimensions for expressing the vision (Kouzes & Posner, 2002):

  • Ideality: The Pursuit of Excellence—being the very best;
  • Uniqueness: Pride in Being Different—being unique from other organizations with similar missions;
  • Future Orientation: Looking Forward—dwelling on what should be, instead of what is;
  • Imagery: Pictures of the Future—painting the future of the organization in vivid, bold, and imaginative colors.

Leaders bring the vision that represents the organization to life along these four dimensions.

Ideality is about pursuing a standard of excellence beyond what seems probable to what is possible. There is an important distinction between what is probable and what is possible. To tailgate on an example provided by Kouzes and Posner (2002), the author knows that most restaurants will fail in the first year of operation, but do restaurateurs let that known probability detract from the possibility of success? No, and that is where leaders clearly differ in attitude and orientation from some managers. Perhaps it is true that managers deal in probabilities while leaders deal in possibilities. Leadership is about reaching for the possibility (or impossibility) of the ideal condition.

Uniqueness of the vision spells out how an organization is different from others. In many ways, an endeavor gains validity, and its members benefit, when the vision converges with the visions of organizations it should resemble and diverges from the visions of organizations it should not. When this convergent and divergent validity of the vision is apparent to members, they will either want to be part of the journey or not: either they will want to stand out in a crowd and be proud of the fact that they are doing something different, or they will not. The author’s past company was not just another web marketing company but a company that helped other companies mine the dormant veins of gold in their websites; we did this by increasing both the quantity and quality of sales contacts generated by the website. So it is with any company: leaders must make sure that the vision statement embodies a clear differentiation of the mission away from commoditization.

Future Orientation is an attribute of vision that is missing from many organizations. There is no doubt that many managers spend time thinking about the future. The question is whether they spend any time doing what Kouzes and Posner (2002) have identified as essential: being “devoted to building a collective perspective on the future.” Visions are pictures of destinations, not starting points, and leaders need to spend time not only dwelling on future possibilities but also communicating this future orientation to their organizations.

Imagery provides a visual reference point for the organization’s vision. Painting a clear mental picture of the vision can make great strides across the territory separating the constituency from the goal. Mental imagery serves as the trial run for the journey, the prototype, or the model. The imagery is not the vision just as the topographical map is not the territory. However, in order to focus on the vision, the imagery associated with the vision offers considerable utility to leaders. See the vision. Be the vision. Attain the vision.

Reference

Kouzes, J.M., & Posner, B.Z. (2002). The leadership challenge (3rd ed.). San Francisco, CA: Jossey-Bass.

Saturday, December 30, 2006

Vision and Leading Organizations

A vision is important for followers because it charts a strategic course for the organization beyond what others thought was possible and beyond what others consider ordinary—a vision can be a source of motivation (Kouzes & Posner, 2002). A vision can provide followers something with which they can identify, believe in, and work toward completing. A vision can paint an inspiring picture, point toward a new future, and dramatize an important new direction for the company, providing a concrete representation of where the leader is trying to take the organization.

The importance of envisioning the future for leaders is that it posits what is now imaginary for the organization but that can be made real. Very little progress is made without someone imagining a better future for the organization. The author cannot help but quote a passage from the Old Testament of the Bible in this instance: “Without prophetic vision people run wild, but blessed are those who follow God's teachings.” (Proverbs 29:18, God’s Word Translation). Although this verse is referring to guidance from God, the message is the same: people need a clear, far-reaching picture of where they need to be heading, and it is incumbent upon leaders to provide that vision.

Admittedly, organizational behavior theorists suggest that to focus all collective energies, the organization must have a mission statement. Regardless of the organizational context, we often substitute words like purpose, mission, legacy, dream, goal, calling, or even personal agenda, but all of those terms imply the accomplishment of some goal beyond the status quo. Although the term vision was once mocked by CEOs, envisioning and communicating a vision now has to be part of the CEO’s job, with more employees, stockholders, and customers expecting corporate leaders to articulate where they see the company going in the future.

Reference

Kouzes, J.M., & Posner, B.Z. (2002). The leadership challenge (3rd ed.). San Francisco, CA: Jossey-Bass.

Friday, December 29, 2006

Mill and Chadwick on Government Intervention

The social and economic policies of John Stuart Mill and Sir Edwin Chadwick addressed issues of their day, but they are surprisingly relevant to current problems facing society. Mill’s ideas on income taxation are still discussed in modern politics. The U.S. federal income tax system is progressive in nature, but Mill stressed equality of sacrifice, which we do not see resulting from our complicated tax code with its myriad deductions. Advocates of a flat tax can reach back to Mill’s writings to find support for their arguments. With inheritance taxes, Mill sought to redistribute wealth as a matter of public policy. Mill’s ideas on the topic of welfare reform are extremely relevant today: he regarded welfare dependence as an evil and proposed a self-help system based on economic incentives to encourage work and public education to improve worker skills.

Chadwick applied utilitarianism to bureaucratic practices and produced some interesting results. Chadwick’s view was that the public interest that Bentham agonized over should be defined in terms of improved economic efficiency; for example, the reduction of waste was in the public’s best interest. Chadwick made many useful observations about the utility of crime and the inefficiencies in the economics of justice, especially within the jury trial system. He noted a distinct correlation between police compensation and the quality of law enforcement, advocated the benefits of a centralized bureau for the collection of crime data, and promoted the idea of streetlights as a deterrent to crime. Chadwick’s contract management concept is an ingenious way of introducing the discipline of competition into a business enterprise that would otherwise provide services under a natural monopoly.

Peel and Gladstone’s ideas about limiting the taxes that could be levied would be very interesting to discuss today.
Various governmental entities now assume the right, if not the obligation, to tax every conceivable transaction. Although Mill and Chadwick agreed that government intervention was necessary to achieve optimal results, Mill disagreed with Chadwick’s idea that good could come from centralizing all political and economic authority.

Reference

Ekelund, R. B., Jr., & Hébert, R. F. (1997). A history of economic theory and method (4th ed.). New York: McGraw Hill.

Thursday, December 28, 2006

Organizational Charts Are Not Organizational Structure

The organizational structure is not defined strictly by organizational charts. In other words, organizational charts do not an organizational structure make. This seemingly obvious point is sometimes forgotten. The organizational chart is a one-dimensional description of the organizational structure, chiefly useful for beginning to explain the structure to third parties or internal parties who are unfamiliar with the actual activities performed, how the personalities of the position occupants influence behavior, and how the organization actually strives toward completing business goals (Gibson, Ivancevich, & Donnelly, 1994). Organization charts show formal reporting relationships much like a road map shows roads, but with none of the detailed topography or actual flow of communication traffic. An organizational chart can be deceptive because it masks the immense complexities of how the organization works to accomplish the goals for which it was designed.

The actual organizational structure is a highly complex and abstract concept. Organizations are structured around four basic organizing principles:
  • Division of labor;
  • Bases of departmentalization;
  • Department size;
  • Centralization of authority.
Decisions by the founders and executive management have determined the structure of the organization. These decisions were, and continue to be, influenced by the managers themselves, job designs, differences in individuals, job competencies, technology, environmental uncertainty, and overall corporate strategy. The structure of the organization and its resulting effectiveness are influenced by many factors and exert an influence on employees, customers, and the environment.

Reference

Gibson, J.L., Ivancevich, J.M., & Donnelly, J.H., Jr. (1994). Organizations: Behavior, structure, processes (8th ed.). Boston, MA: Irwin.

Wednesday, December 27, 2006

Leadership and The Hardiness Factor

The hardiness factor is the attitudinal difference between executives who were in high-stress situations and experienced a high number of illnesses versus those who were in equally high-stress situations but experienced a low number of illnesses (Kouzes & Posner, 2002). The executives who appeared to be psychologically hardy exhibited these traits: (1) they were committed to the various parts of their lives; (2) they felt a sense of control over the things that happened in their lives; (3) they experienced change as a positive challenge.

Researchers found that the family is an important breeding ground for hardiness. Hardiness is important for leaders because it seems to be correlated with viewing the changes that are presented in life and the corresponding stress as normal parts of life. Stressful events are sometimes associated with changes, and often in organizational or environmental changes we find opportunities to lead. Hardiness is characterized by viewing stressful events as interesting, subject to personal influence, and as an opportunity for development. A leader needs to be psychologically hardy to enlist others and the leader needs to create an environment where constituents can accept risk and uncertainty.

Kouzes and Posner (2002) describe three ways for the organization to create a climate that develops hardiness and helps people cope more effectively: (1) build commitment by offering more rewards than punishments; (2) build a sense of control by choosing tasks that are challenging but within the person’s skill level; (3) build an attitude of challenge by encouraging people to see change as full of possibilities.

Reference

Kouzes, J.M., & Posner, B.Z. (2002). The leadership challenge (3rd ed.). San Francisco, CA: Jossey-Bass.

Podcast: http://mbafaq.podbean.com/2006/12/26/leadership-and-the-hardiness-factor/

Tuesday, December 26, 2006

Basic Research Concepts: A Handy Reference

Applied research—Research that uses basic research methods but applies the outcome to a business problem.

Basic research—The clinical or scientific method of research. This type of research is often done in laboratories or under controlled circumstances.

Four Types of Research Methods: Reporting—A summary or incomplete review of existing data; Descriptive—Most often used in marketing or sales; this type of research asks the who, what, when, where, why, and how questions; Exploratory—Using focus groups or a small study to get a feel for the problem; Predictive/Causal—Research in which one unit (the control) is held steady while the treatment is applied to the other unit, which tests whether the treatment itself is the reason for change in the experimental unit.
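To make the predictive/causal design concrete, here is a minimal sketch in Python (the post itself contains no code, and all measurements below are invented): one unit is held steady as the control while the other receives the treatment, and the difference in average outcomes estimates the treatment effect.

```python
# Hypothetical sketch of a simple control-vs-treatment comparison.
# All measurements are invented for illustration.

control = [98.0, 101.0, 100.0, 99.0, 102.0]    # unit held steady
treated = [103.0, 105.0, 104.0, 102.0, 106.0]  # unit receiving the treatment

def mean(xs):
    return sum(xs) / len(xs)

# Estimated treatment effect: difference in average outcomes.
effect = mean(treated) - mean(control)
print(effect)  # 4.0
```

In a real study, the comparison would be accompanied by a significance test to rule out chance differences between the two units.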

Time Factors for Research: Longitudinal—A research study performed on a sample over a period of time; Cross-sectional—A research study done only once, which provides a snapshot of what is occurring somewhere at a particular time.

Four Types of Research Validity: Construct Validity—the theoretical or practical underpinnings of the hypotheses and measurement of the variables; External Validity—the comparability or generalizability of the findings to other samples and settings; Internal Validity—for descriptive, explanatory, or causal studies this is basically an answer to the question of whether we saw what we thought we saw; Conclusion Validity—did we use the proper statistical tools to draw these conclusions?

Operational definition—A definition stated in terms of specific testing or measurement criteria. These terms must have empirical referents, which means we must be able to count or measure them in some way. The object to be defined can be a physical one (e.g., a machine tool), or it can be an abstract one (e.g., achievement motivation).

Level of Measurement—the characteristic of the data with respect to alphabetic and numerical values assigned to represent it, such as the measures of variables on surveys. Data can be represented at four levels of measurement:
  • Nominal—e.g., Male/Female; The word nominal means in name only. Nominal variables are used on surveys to describe or identify the population being sampled;
  • Ordinal—e.g., Rare, Medium Rare, Medium, Medium Well, Well Done; an ordinal measure can capture how a person feels on an issue, which is the case when the distances between the measures cannot be determined scientifically;
  • Interval—e.g., temperature in degrees Fahrenheit or 1-to-5 satisfaction scales; with interval measures, the distance between each unit of measure is equal and precisely defined, but there is no true zero point;
  • Ratio—e.g., age in years, elapsed time, or distance; ratio data is captured with absolute measures that have a true zero. Height, weight, distance, and money are all examples of ratio data.
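As a supplementary illustration (the original reference contains no code, and the variables below are invented), the four levels of measurement can be contrasted in Python by the operations each one supports:

```python
# Nominal: names only -- counting categories is valid; averaging is not.
gender = ["Male", "Female", "Female", "Male", "Female"]
counts = {g: gender.count(g) for g in set(gender)}

# Ordinal: ordered labels -- ranking is valid, but the gaps between
# ranks have no scientifically determined size.
levels = ["Rare", "Medium Rare", "Medium", "Medium Well", "Well Done"]
rank = {name: i for i, name in enumerate(levels)}
assert rank["Medium"] > rank["Rare"]  # order is meaningful
# (rank["Medium"] - rank["Rare"]) is NOT a meaningful distance

# Interval: equal spacing but arbitrary zero -- differences are valid,
# ratios are not (64 degrees F is not "twice as hot" as 32 degrees F).
temps_f = [32.0, 64.0]
temp_diff = temps_f[1] - temps_f[0]

# Ratio: true zero -- all arithmetic, including ratios, is valid.
weights_kg = [40.0, 80.0]
assert weights_kg[1] / weights_kg[0] == 2.0  # genuinely twice as heavy
```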

Unit of Analysis—A classification of the individual, group, company, or societal unit under study. It is relevant because comparing data from different units of analysis frequently leads to conclusions that, while they seem logical, are in fact erroneous.

For example, predicting the outcome of local elections based on a national survey or predicting the outcome of national elections based on a local survey. This fallacy involving misapplication of the unit of analysis is related to the ecological and exception fallacies. Consider this important issue in research, especially when using secondary data (i.e., data collected by somebody else for a different research question), as it is not always clear whether one is examining the individual, group, company, industry, etc.

For example, news commentators sometimes compare mismatched units of analysis and draw conclusions that may not be correct. If one draws conclusions about a group from one individual case, that is the exception fallacy. If one draws conclusions about an individual because they are part of a group, that is an ecological fallacy.

For example, you know of several people who are Razorback fans and observe that they each own a red pickup truck. If you then meet a Razorback fan at the university, can you assume that they own a red pickup? No, because of the potential for ecological fallacy; you have erroneously assigned a group attribute to an individual. If you then see red pickup trucks on the road, should you yell “Soooeee...” out the window at each one of them? No, because of the exception fallacy; it is possible that you have assigned an individual attribute to the entire group. The problem in both cases is that the unit of analysis of the information under examination does not match the type of research question at hand. Hence, there is a possibility of committing a unit of analysis fallacy.
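The two fallacies can be made concrete with a small invented data set (this sketch is not from the original post): group-level rates do not determine individual cases, and a handful of individual cases does not determine a group-level rate.

```python
# Hypothetical individual-level observations: (is_razorback_fan, owns_red_pickup).
# All records are invented for illustration.
people = [
    (True, True), (True, True), (True, False),     # fans you happen to know
    (False, True), (False, True), (False, False),  # other drivers
]

# Ecological fallacy: "most fans I know own red pickups, so THIS fan does."
fans = [p for p in people if p[0]]
fans_with_truck = sum(1 for p in fans if p[1]) / len(fans)
assert fans_with_truck < 1.0  # a high group rate still misses individuals

# Exception fallacy: "these red-pickup drivers are fans, so ALL of them are."
truck_owners = [p for p in people if p[1]]
owners_who_are_fans = sum(1 for p in truck_owners if p[0]) / len(truck_owners)
assert owners_who_are_fans < 1.0  # some red-pickup drivers are not fans
```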

Sunday, December 24, 2006

Alternative Dispute Resolution: Summary Jury Trial

Disclaimer: The following case background and solution are meant for educational purposes only. I am not a lawyer and this is not legal advice.

In re NLO, Inc.
United States Court of Appeals, Sixth Circuit
5 F.3d 154 (1993)

Case Background

“NLO ran a uranium processing facility. It was sued by former employees who claimed they suffered injuries because ‘NLO had intentionally or negligently exposed them to hazardous levels of radioactive materials, increasing their risk of cancer and subjecting them to emotional distress.’ The trial court ordered that a summary jury trial be held and that it would be open to the public. NLO petitioned the appeals court to vacate the district court order to participate in the summary jury trial before the matter could be tried in regular court” (Meiners, Ringleb, & Edwards, 2000, pp. 115-116).

Summary Jury Trial

A summary jury trial is a form of mini-trial that employs a jury and is held after discovery in the event that a case is not settled before trial. Summary jury trials save time and expense for the plaintiff and the defendant, yet they are not mandatory. Summary jury trials are not technically due process per se; they are not adjudication but an effort to help the parties to a case settle the dispute outside formal court proceedings, at a substantial cost savings to the taxpaying public. Alternative Dispute Resolution (ADR), by its very nature, requires voluntary consent to begin the process or agreement in a contract that specifies ADR as the means of resolving any disputes. Consider a summary jury trial between two publicly traded companies: the confidentiality of the proceedings could be very important in terms of publicity, effect on the public stock price, protection of trade secrets, and the potential for biasing the pool of potential jurors, assuming that local courts would have jurisdiction over the case.

The primary reason that NLO did not want to have a summary jury trial in the case above may have been that it believed it could not legally be compelled to participate in one, despite the trial court’s order, and that going to a public summary jury trial, regardless of the outcome, would substantially remove the benefits of settling the case out of court. Perhaps NLO did not want the limited evidence associated with information exchange made available to the public, which would not be enough to defend it in the court of public opinion and would just cause negative publicity. An open summary jury trial might provide confidential information to the public that could increase business risks and open the company up to future class action lawsuits by the community near the uranium processing facility. Until a dispute goes to trial, no such risk exists, and so a public summary jury trial is substantially less attractive as a means of ADR.


Reference

Meiners, R.E., Ringleb, A.H., & Edwards, F.L. (2000). The legal environment of business (7th ed.). New York: West Legal Studies in Business.

Saturday, December 23, 2006

Harvard Business School: What MBAs Don’t Know about Sales Timing

As managers who are systematically trained in business management, MBAs frequently have little or no experience with working or managing a sales process. One of the most important issues in working a sales process is the notion of timing. Selecting the proper timing can make or break success in the stages of any sales process. Consider these brief but timeless rules of thumb for timing the sales process (McCormack, 1984):

  • Present again to a potential customer who once rejected your offer, even if you have to wait five years.
  • Use common sense about timing the sales cycle—a buyer who is unfamiliar with your product will require time and attention to initiate the sales process.
  • Listen to the buyer to understand timing cues—control the sale by structuring it around natural timing events perceived by the customer.
  • Don’t skip steps in a well-understood sales process—take your time and avoid the temptation for instant gratification.
  • Be patient and allow people to move through the steps of the sales process.
  • Be persistent in presenting your product and offer to high quality prospects instead of a large number of prospects.
  • Renew the contract when the buyer is happiest—ask for money when times are as good as possible for the client.
  • Consider multiple futures that might be better or worse relative to the deal you offer the client.
  • Impending doom or the setting of the sun are motivating factors in any sales process—there is no better time to close the deal than when the client has to make a decision.
  • Seasonal and cyclical events external to the sales process can be a motivating factor—tempus fugit (i.e., time flies) and externalities affecting the client are often timed with the calendar.
  • The best decision makers are often those who are coming into or going out of the decision-making position.
  • Claim credit for considerate timing of the sales process—let the client know you are intentionally being considerate of their schedule.
  • Use inconsiderate timing judiciously—contact the client at inconvenient times if the matter is very important by letting them know why it is important.
  • Avoid giving the buyer deadlines that are later subject to revision—contrived deadlines such as special pricing can backfire if the deadline is extended.
  • Explain deadlines that seem threatening—inform the buyer of why the deadline is important to their interests.
  • Get to the point quickly in sales presentations and conversations, especially when dealing with busy buyers.
  • Convey important points early in your presentation in case you run out of time.
  • Always use less time in presentations than you ask for—ask for more than you need because brevity is the better part of valor.

Reference

McCormack, M.H. (1984). What they don’t teach you at Harvard Business School: Notes from a street-smart executive. New York: Bantam Books.

Friday, December 22, 2006

Wealth Distribution Theory

Like all of us, John Stuart Mill was a product of his times. Mill witnessed social upheaval and much criticism of classical economic theory. Mill was a philosopher in addition to being a brilliant economist, and he sought social reforms in the humanistic vein, such as greater economic equality (Ekelund & Hébert, 1997).

One way to level the playing field for all and to achieve the goal of social reform was to limit the amount of funds passed to individuals through gifts and inheritance. Mill strongly favored the proposition that individuals should be allowed to profit from their own labor; however, the over-accumulation of wealth resulted in a class of individuals who were benefiting from someone else’s labor rather than their own. Mill sought to redistribute accumulated wealth but not self-generated income.

A graduated inheritance tax was a means to redistribute wealth to the government, which could then use it to promote equality of opportunity for all individuals. Mill felt this was a way to ensure that each individual had maximum opportunity from the start.
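As a purely illustrative sketch of how a graduated tax operates (the brackets, rates, and code below are invented, not Mill’s), the marginal rate rises with the size of the estate, so larger inheritances surrender a larger share:

```python
# Invented graduated schedule: (upper bound of bracket, marginal rate).
BRACKETS = [
    (100_000, 0.00),       # first 100,000 untaxed
    (500_000, 0.20),       # next 400,000 taxed at 20%
    (float("inf"), 0.40),  # remainder taxed at 40%
]

def inheritance_tax(estate):
    """Tax owed on an estate under the invented graduated schedule above."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if estate <= lower:
            break
        # Tax only the slice of the estate that falls inside this bracket.
        tax += (min(estate, upper) - lower) * rate
        lower = upper
    return tax
```

Under this invented schedule, a 300,000 estate owes 40,000 (an effective rate of about 13%), while a 1,000,000 estate owes 280,000 (28%), which is the redistributive effect Mill sought.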

Reference

Ekelund, R. B., Jr., & Hébert, R. F. (1997). A history of economic theory and method (4th ed.). New York: McGraw Hill.

Thursday, December 21, 2006

Conditions That Foster Leadership

Many words could be used to describe conditions that foster leadership abilities. Respondents used each of the words listed below when describing their personal-best leadership experiences (Kouzes & Posner, 2002). These experiences led to leadership reactions in two primary ways: (1) respondents became agents of change, not managers but those who brought process innovations and new ideas, methods, and solutions into use; (2) respondents led through work that was assigned to them, and not necessarily through entrepreneurial or intrapreneurial efforts. Here is how respondents in the Kouzes and Posner (2002) studies described the conditions that foster leadership growth:

Challenging, Commitment, Daunting, Dedication, Demanding, Determination, Developmental, Discovering, Dynamic, Empowering, Energizing, Exciting, Fun, Important, Inspirational, Inspiring, Intensity, Motivating, Positive, Proud, Rewarding, Spiritual, Stimulating, Strengthening, Stressful, Thrilling, Tough Work, Unique, Unusual, Uplifting, Whole-hearted


Responding to the above conditions resulted in the respondents becoming agents of change who effected substantive improvements to their respective organizations: new or changed processes resulted in innovative, tangible solutions. The innovation that resulted from the efforts of these agents of change was not necessarily caused by their own initiative; the person to whom the leader reported directly typically initiated the conditions that fostered the leadership transformation. The conditions that started and guided the unfolding of the leadership challenge are surprisingly humble: just real people trying to make a difference in the organization.

Reference

Kouzes, J.M., & Posner, B.Z. (2002). The leadership challenge (3rd ed.). San Francisco, CA: Jossey-Bass.

Wednesday, December 20, 2006

Customer Value Appraisal and Consumer Trends

Consumer Attitudes and Trends

Consumer needs substantially differ from consumer wants during the process of becoming our customers. Marketing managers try to understand consumer needs (and unmet needs) while attempting to influence consumer wants. The hope is that consumer wants will be indistinguishable from needs in the mind of the consumer (or corporate purchasing agent). For example, consumers need toothpaste and businesses need spare parts for machinery, but we try to influence them to demand our taste, color, packaging, pricing, distribution, service, etc., and associate those characteristics with our brand.

Consumer attitudes are very different from consumer behavior, and attitudes often are not good predictors of behavior. Marketing managers must understand that consumer attitudes as reported in surveys can vary widely from behavioral data: what consumers actually buy and how they use the product. For example, a consumer who is registered to vote may tell you that they will vote for a Republican candidate or that they voted for the Republican party’s candidates in the last election; however, there is a whole host of social context issues that influence what they will actually do at the time the decision is made, or what they will actually say when asked to report what they did.

Marketing managers must develop intuition about customers’ needs, wants, and behaviors that allows them to recognize trends. Of course, competitors are trying to do the same, so there is a need to be timely in gathering and acting upon intelligence. It is the marketing department’s job to be evangelists within the organization. Consider also that such intuition must be grounded in current knowledge about the customer; intuition alone is no substitute for solid research.

Industry and Competitive Trends

Oftentimes, marketing managers must integrate data collected for their current research needs (i.e., primary data) with data previously collected by the company or an outside party (i.e., secondary data). Knowing when to collect data directly as opposed to repurposing existing data is a critical decision-making skill. It is important for marketing managers to have knowledge of the industry and competitor actions to formulate and test hypotheses about trends. For example, it is now commonplace that security is required in the provision of customer service and the safeguarding of information technology networks, but what is the next trend? Business empires are built on such knowledge.

Customer Feedback

Sources of Customer Feedback—in any organization, the marketing manager must be capable of identifying the voice of the customer and representing that voice to the entire organization. Moreover, marketing managers cannot respond to every customer issue nor direct organizational resources toward every issue; instead, there is a balance to be struck between statistical significance and business risk. For example, if we ignore all little issues and never follow up with customers, we run the risk of having customers view our customer feedback process as hogwash. Still, organizations have neither the time nor the resources to address every issue. It is up to the marketing function to be the arbiter of the depth and magnitude of the customer’s voice.

Drivers of Customer Value

Customer value is in the eye of the beholder. The depth and magnitude of the 4P’s (Product, Price, Place, and Promotion) should be defined by the marketing manager, but the customer will interpret the value of the marketing mix on their own. Never forget this reality, lest you delude your organization into thinking that customers “will eat whatever we set out for them.” The marketing manager is always challenged to take this understanding of what customers perceive as valuable and embody it in all marketing strategies and operational tactics. Normally, consistent delivery of real customer value drives meaningful and profitable customer relationships.

Customer Acquisition and Retention Objectives

Marketing activities can be subdivided into those focused on acquiring, retaining, and nurturing customers. Alternatively, one could think of these activities as getting, keeping, and growing customers. Important: it is often extremely expensive to acquire customers, moderately expensive to keep customers, and extremely profitable to grow existing customer relationships. Always establish clear objectives for getting, keeping, and growing customers relative to the marketing budget. Marketing managers will be evaluated by the accounting and finance function when the marketing budget is renewed. Pardon the analogy, but just like an army, how much food, bullets, and fuel you are given is a function of killing, capturing, and taking territory away from the enemy. Moreover, you need long-term objectives for each of the three activities, but you are often judged on short-term results; don’t be discouraged by this myopia.
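A rough sketch of the get/keep/grow economics described above (all figures and the formula below are invented for illustration; real lifetime-value models also discount future cash flows):

```python
def simple_clv(annual_margin, retention_rate, years, acquisition_cost):
    """Undiscounted customer lifetime value under a constant retention rate.

    Invented toy model: each year the expected margin shrinks by the
    probability the customer has churned.
    """
    value = -acquisition_cost  # acquisition is paid up front
    survival = 1.0
    for _ in range(years):
        value += annual_margin * survival
        survival *= retention_rate
    return value

# A customer kept at 90% retention is worth far more over five years
# than one churning at 50%, despite identical acquisition costs:
kept = simple_clv(annual_margin=200, retention_rate=0.9, years=5,
                  acquisition_cost=300)
churner = simple_clv(annual_margin=200, retention_rate=0.5, years=5,
                     acquisition_cost=300)
assert kept > churner
```

The design choice here is deliberate: retention enters multiplicatively, so small improvements in keeping customers compound into large differences in lifetime value.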

Segmentation Analysis and Segment Profitability

Segmenting markets can be a difficult process, but it is critical; it narrowly defines how you will profitably attack the marketplace in the minds of your customers relative to the competition. You will need access to customer data to decide which variables define your customers and which media to purchase to reach them. Demographics (e.g., age, gender, education, and income), psychographics (e.g., personality, preferences, and social group membership), and behaviors (e.g., hunter, smoker, shopper, student) are used to segment consumer markets. Business markets are often carved up using industry classification, company size in annual sales volume, geographic region, markets served, etc. Sometimes customer needs cut across the above segmentation variables, and the market can be segmented based on needs instead. Marketing managers are ultimately faced with choosing the most actionable segments in terms of what is doable (i.e., operationally possible) and profitable (i.e., financially sound). There will be tradeoffs between segments and the investment in time and resources required to pursue them. Establishing informed segmentation priorities is critical.
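As a hypothetical sketch (the customer records and segment rules below are invented), rule-based segmentation on demographic and behavioral variables might look like this:

```python
from collections import defaultdict

# Invented customer records mixing a demographic variable (income)
# with a behavioral one (visit frequency).
customers = [
    {"age": 24, "income": 35_000, "visits_per_month": 8},
    {"age": 45, "income": 120_000, "visits_per_month": 2},
    {"age": 31, "income": 60_000, "visits_per_month": 5},
]

def segment(c):
    """Assign a customer to an invented segment by simple priority rules."""
    if c["income"] >= 100_000:
        return "premium"
    if c["visits_per_month"] >= 5:
        return "frequent"
    return "occasional"

# Group customers by segment so each segment's size and
# profitability can be evaluated separately.
segments = defaultdict(list)
for c in customers:
    segments[segment(c)].append(c)
```

In practice the rules would come out of the data (e.g., clustering), and each resulting segment would then be sized and tested for profitability before being pursued.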

Reference

Kerin, R.A., Hartley, S.W., Berkowitz, E.N., & Rudelius, W. (2006). Marketing. (8th ed.). New York: McGraw-Hill.

Tuesday, December 19, 2006

Applying Learning Theory to Management Best Practices

Learning theory underpins the process theories of motivation. Learning can be thought of as the process by which relatively permanent behavioral change occurs as a result of repetition or experience (Gibson, Ivancevich, and Donnelly, 1994). This repetition can be situational or the result of formal training. Although the change that results from learning can be lasting, it may not necessarily be effective for the organizational context.

There are three types of learning manifested in the process theories of motivation: classical conditioning, operant conditioning, and social learning. Classical conditioning involves pairing a neutral stimulus with one that naturally elicits a response until the neutral stimulus alone produces the conditioned response. Operant conditioning involves controlling behaviors by changing the reinforcement or punishment that follows a behavior. Social learning is the acquisition of behaviors from other employees through social interaction.

Unlike the content theories of motivation, which focus on the individual's characteristics, the process theories focus on changing the behavior of individuals through learning. Reinforcement theory (researched by B.F. Skinner) uses operant conditioning: the workplace manager rewards positive behavior and punishes negative behavior. Expectancy theory relies on managers to determine outcomes that are important to employees and connect them to the goals of the organization -- workers estimate the value of certain outcomes. In equity theory, employees evaluate rewards by comparing their own inputs and outcomes with those of others. Goal-setting theory requires that individuals link behavior to a series of steps that lead to achieving goals.

In summary, managers can influence employee motivation and should take into account that ability, confidence, and opportunity also play roles in motivation. Managers need to be sensitive to variation in employees' needs and abilities, which they can track by monitoring these attributes. Furthermore, some individuals practice high degrees of self-regulation in personal motivation. When the manager serves as a good role model, he/she can motivate employees through intellectual stimulation. The manager needs to be actively involved in monitoring and promoting the motivation of employees. When employees see that their valued outcomes can be achieved through higher levels of performance, a major part of the motivation strategy has succeeded.

Each of the process theories describes a dependence on workers learning from the inputs they receive -- information and rewards -- and taking new actions accordingly. Presumably, managers guide this iterative process toward collective behavior within the organization that results in the completion of organizational goals. One hopes the result is not, as the text states, that in reality many managers deal with the abundance of academic theories by choosing to ignore all of them.

Reference

Gibson, J.L., Ivancevich, J.M., & Donnelly, J.H., Jr. (1994). Organizations: Behavior, structure, processes (8th ed.). Boston, MA: Irwin.

Monday, December 18, 2006

Ricardo’s Labor Theory of Value

Ricardo’s theory of value treated labor as the primary component of actual product cost and therefore as the most important determinant of a commodity’s value. Ricardo argued that every unit of labor applied toward the production of a commodity should be reflected in its value, as should any reduction in that labor. According to Ekelund and Hébert (1997), this position led many economists to judge the Ricardian theory of value to be a pure labor theory of value. There were exceptions to this interpretation: scarce or nonreproducible goods possessed value without concomitant labor, and capital was treated as essentially embodied labor. Moreover, note that Ricardo excluded rents from product costs, which carries the assumption that land has no alternative economic use.

Through the concept of embodied labor, Ricardo further argued that capital used in production constitutes an addition to the value of the product. Ricardo recognized that no time value of money was factored in for this capital, yet he clearly thought the opportunity cost of its use should be considered. The Ricardian theory of value falls short of sufficiency by assuming that adjustments in wages for qualitative differences will be minor and that economic rent is excluded from the costs of goods. Moreover, the Ricardian theory of value confines the role of demand in determining prices to cases involving non-commodity goods, where goods are produced with constant average costs of production. Ricardo erroneously argued that because of increasing population (i.e., Malthus’s population principle), economic growth was doomed to slow to a stationary state. His assumption made no allowance for technological progress and its effect on production that would fuel the value equation.

Reference

Ekelund, R. B., Jr., & Hébert, R. F. (1997). A history of economic theory and method (4th ed.). New York: McGraw Hill.

Sunday, December 17, 2006

Collecting a Writ of Execution

Disclaimer: The following case background and solution are meant for educational purposes only. I am not a lawyer and this is not legal advice.

New Maine National Bank v. Nemon, Supreme Judicial Court of Maine, 588 A.2d 1191 (1991).

Case Background

“Nemon borrowed $125,000 from New Maine National Bank. He signed a promissory note that stated that, in case he did not pay, he (Nemon) would pay all costs associated with collecting this debt, including attorneys' fees. Nemon defaulted on his loan. The bank demanded that Nemon pay the balance due. Nemon did not pay, and the bank sued for breach of contract. The bank moved for a summary judgment against Nemon. The trial court granted the motion, stating that the bank was entitled to the balance due on the loan, accumulated interest, and attorneys' fees, plus $3,000 extra to cover the anticipated costs of collecting the money from Nemon. After the court entered its judgment, the bank sought and obtained a writ of execution against Nemon.

Nemon did not comply with the writ and repeatedly failed to produce documents that the court ordered him to produce concerning his debt to the bank. Nemon also repeatedly failed to appear at scheduled court dates. The court charged Nemon with contempt and authorized a civil order of arrest but stayed the sentence so that Nemon could absolve himself of the contempt charge. Nemon failed to appear at court to absolve himself. Thereafter, the court issued an arrest warrant. The next day, Nemon paid the outstanding balance due on his judgment. Four months later, the bank moved to collect additional sums from Nemon to cover the costs of its numerous post-writ-of-execution expenses. The court granted the motion. Nemon appealed" (Meiners, Ringleb, and Edwards, 2000, pp. 91-92).

Writ of Execution

Collecting a monetary damage award from a defendant is the responsibility of the plaintiff. When the defendant is unable or unwilling to pay, the plaintiff can seek a writ of execution, which instructs a local official, such as the sheriff, to seize and sell property to satisfy the judgment. Alternatively, the courts may order garnishment of the defendant’s wages or other assets, which could involve an order for regular deductions from the defendant’s pay until the judgment is satisfied (e.g., child support).

The bank was unable to collect the judgment based on the writ of execution, but there are ways to prevent this situation. In this instance, the writ of execution is an order to pay. Preventing this type of collection problem with an unsecured promissory note can be difficult, because the plaintiff had virtually no tangible leverage over Nemon to help collect the amounts awarded. Ultimately, Nemon had to be threatened with arrest to comply with the judgment of the court. Borrowers like Nemon are exactly why banks typically do not lend money via unsecured promissory notes. Furthermore, other than the loan transaction, it seems that Nemon was not a customer of New Maine National Bank, so there was no way to restrict other account balances to recover the funds awarded. Either New Maine’s legal counsel did not specify property that could be seized, or perhaps the sheriff did not seize bank accounts or real estate or attach Nemon’s wages, if that was possible. Clearly, Nemon had an account at some institution with which to pay the $125,000 plus the judgment; alternatively, perhaps he had other assets that could be sold. Why other property was not pursued in the writ of execution is not clear. The bank could have required collateral on the loan to use as leverage in collecting the balance.

The Supreme Court’s rationale for awarding treble damages (i.e., $24,000 instead of $8,000) may be understood from the fact that the Superior Court held Nemon in contempt and that the Supreme Court issued a unanimous (i.e., per curiam) opinion in favor of the damage award. By ignoring the repeated actions of the court, some of which were intended to assist him, Nemon behaved like a scofflaw, and the Supreme Court sought to make an impression on him. The message is that if you ignore the court, you will not only pay what you owe but also pay for the costs your defiance has caused.

Reference

Meiners, R.E., Ringleb, A.H., & Edwards, F.L. (2000). The legal environment of business (7th Ed.). New York: West Legal Studies in Business.

Saturday, December 16, 2006

Personal Best Leadership Experiences: Challenge is the Workshop of Excellence

Kouzes and Posner (2002) reported that over half of the personal best leadership experiences were initiated by someone other than the leader. The leader’s immediate manager was the most common source of projects, and it is logical to assume that other individuals in the organization or stakeholder constituencies initiated projects as well. Oftentimes, these leadership projects were initiatives for change that altered the sense of business as usual in the organization. The survey question from Kouzes and Posner (2002) asking who initiated the project reflected great instincts, as one would naturally assume that leaders are primarily the ones who engage in entrepreneurship or intrapreneurship. The implication is that leaders must not only work with those reporting to them and their peers, but must also follow the leadership of those to whom they report. Another important meaning of this finding is that leaders do not necessarily start projects but lead them toward a logical and substantive finish–in effect, leadership is clearly separate from entrepreneurship.

Challenge is the Workshop of Excellence

Kouzes and Posner (2002) noted three important lessons about leaders acting as agents of change in their personal best leadership experiences:

  • Challenges seek leaders just as leaders seek challenges – the process goes both ways.
  • Challenge is the workshop of excellence. The door to doing one’s best is opened by opportunities to challenge the status quo and introduce change.
  • New opportunities can bring forth unknown skills and abilities. Ordinary men and women can do the extraordinary with the right opportunities and support.

These are interesting findings because they are somewhat counter-intuitive. The stereotype is that leaders seek out challenges far more often than challenges find them–this is not the case. As one of my mentors once put it, “Work tends to flow to those who can do it.” An orderly, controlled environment is unlikely to be the best environment for challenges to the status quo to thrive. Leaders can be regular people who have been given the opportunities and capabilities to succeed. Each of these findings runs somewhat counter to the prevailing wisdom about how leadership and innovation interact.

Reference

Kouzes, J.M., & Posner, B.Z. (2002). The leadership challenge (3rd ed.). San Francisco, CA: Jossey-Bass.

Friday, December 15, 2006

Values Based Organizations: Smart and Ethical

In an era of business scandal, a corporate values program is increasingly seen as a strategic tool for organizations to both avoid litigation and achieve competitive advantage in the following ways:

  • Avoid fines
  • Reduce litigation
  • Retain current employees
  • Attract ethical new hires
  • Build customer loyalty
  • Maintain corporate reputation
  • Strengthen supply chain

The secret to linking corporate values to the list above is to go beyond corporate slogans: management must provide a living example of how the values should be applied in the corporate context. Employees rarely act more ethically than their managers require, so it is important for management to set the bar high. Values-based managers model ethical principles for the entire organization, and especially for less-experienced employees. The organization looks to managers to preach high values and practice what they preach. The culture in a values-based organization is a deliberate creation, not an accident.

A useful exercise for directing systematic attention to the process is for employees to brainstorm about the mission (Devero, 2002), stating in clear, actionable terms the values that will be required to fulfill it. Values are reinforced through specific policies, and the degree to which the values filter through the organization is measured. Successful implementation is rewarded.

Structuring the values-based organization is a long-term commitment with tremendous return on investment. There is an implied link between ethical business and corporate reputation, as well as a relationship between corporate reputation and recruiting the best talent, enjoying durable company goodwill, and the ability to raise investment capital. Ethical behavior from stem to stern of an organization is modeled by executives, financially rewarding, and the right thing to do.

Reference

Devero, A. (2002). Corporate values: Aren't just wall posters—they're strategic tools. In J.E. Richardson (Ed.), Business ethics (Chapter 8). New York: McGraw-Hill.

Thursday, December 14, 2006

Content Motivation Theories: Important to Modern Management Practices

The word motivation describes the forces acting on or within an employee that initiate or direct behavior. The concept of motivation can help a manager understand differences in behavior intensity and behavior direction from employee to employee. Motivation is not directly witnessed or measured but only inferred from observation of its effects on employee behavior. Therefore, inferences about motivation may be incorrect and must be corroborated before being taken as fact.

If our inferences are correct, theories of motivation can be useful in predicting the behavior of employees. Two different categories of motivation theory can be applied as an aid to understanding employee behavior: (1) content theories focus on factors within the person that catalyze, alter, or cease behavior; and (2) process theories seek to describe and analyze how behavior is started, redirected, intensified, or stopped. Gibson, Ivancevich, and Donnelly (1994) described four content motivation theories that are useful in inferring the connection between motivation and behavior: Maslow's hierarchy of needs, Alderfer's ERG theory, Herzberg's two-factor theory, and McClelland's learned needs theory.

  • In Maslow's need hierarchy, individual behavior tends toward satisfaction of basic needs first and is then intrinsically motivated toward higher-order needs. The practical application of this theory is that it makes sense to managers who are trying to ascertain how motivation is manifested in specific employees. A potential problem with Maslow's hierarchy is that it does not attempt to address differences between individuals or the social context in which the needs are felt.
  • Alderfer's ERG theory postulates that individuals who fail to satisfy growth needs exhibit frustration and refocus on lower-order needs. A practical application of this theory is that it calls attention to what happens when any need is not satisfied--managers can relate to the fact that frustration can be a major inhibitor of peak performance.
  • In Herzberg's two-factor theory, the assumption is that not all elements of the job motivate employees, but that the motivational elements can be identified, developed, and fine-tuned. Managers can relate to the practical nature of this theory. A clear shortcoming of the theory is that it assumes all employees respond to altered motivational elements in like fashion.
  • In McClelland's learned needs theory, a person's needs are learned from culture and education. Therefore, the strength of these needs can be enhanced by education. An application of this principle is that needs that are compatible with the organization can be strengthened.


Given that each of the above theories assumes that observation and action will produce results, perhaps the greatest practical application of motivation theory is that it serves the purpose of getting managers to think constructively about what activities and circumstances motivate employees who possess certain types of backgrounds. Motivating employees is a high leverage managerial activity that tends to produce more efficient and effective operations in terms of quality and customer satisfaction.

Reference

Gibson, J.L., Ivancevich, J.M., & Donnelly, J.H., Jr. (1994). Organizations: Behavior, structure, processes (8th ed.). Boston, MA: Irwin.

Wednesday, December 13, 2006

American Mass Media Bias

News reporting is exception reporting. What is different from what happened yesterday or from what is understood to be the status quo is frequently reported as news. MBAs and managers responsible for media and public relations should be cognizant of these natural biases when launching publicity campaigns and handling public relations crises. Consider how business news is reported and synthesized on the broad canvas of political news reporting.

Strong Liberal Bias in American Media

Perhaps you have heard the charge that American mass media exhibit a strong liberal bias. Whether you believe in liberal media prejudice or not, the significant question is whether the editorial view of the media should necessarily parallel the ideology of the American people. The mass media comprise working journalists and editorial staff, many of whom serve as gatekeepers determining which news stories get coverage and how they are covered. These groups tend to hold one another in check to some degree, but because working journalists often have liberal ideological views, the essential nature of the reporting will be to promote equality over order. Thus, the ideology of the American people, who tend to be more conservative, is often not reflected in the reporting. It is not necessary for the American mass media to reflect precisely the views of the American people, but the mass media have a responsibility to report the news in a balanced fashion. For example, when assailing a conservative ideologue for signing a lucrative book deal, the mass media should also assail a liberal ideologue for doing the same.

The charge that practicing journalists tend to hold liberal ideological viewpoints is true, but does this alone make the reporting by the mass media reflect a strong liberal bias? Editorial staff members tend to hold conservative ideological viewpoints. Research indicates that mass media outlets are more critical of incumbents and less critical of challengers (Janda, Berry, & Goldman, 1995). Therefore, yes, when covering news stories relating to a relatively conservative national administration, the media appear liberal. Moreover, when covering a liberal national administration, the media are probably perceived as more conservative. Hence, Janda, Berry, and Goldman (1995) report that virtually no long-term ideological or partisan bias exists in media coverage. A reasonable conclusion is that the very nature of news reporting encourages the journalist to seek stories that do not conform to the status quo. By its very nature, news is different; it is exception reporting. This phenomenon might explain why, in some years, specific news stories receive blatantly liberal treatment designed to appeal to the audience.

Distortion or Underreporting of Global News

There exists a distinct possibility that American journalists severely distort or underreport global and international news; delivering eyes and ears to advertisers may be an underlying cause. To answer the charge that the American mass media skew or ignore international news, one must look to the private ownership of media outlets. Print and broadcast media are privately owned and therefore must generate a profit to survive in the long term. Media outlets are dependent upon advertising revenues, not government subsidies, to cover expenses and generate a profit. Advertising rates are tied to audience size. If one media outlet gears its news coverage for mass audience appeal, then other outlets must also meet this threat or risk losing their audience. Thus, to a large degree, the desire of the audience will dictate the content provided. The problem may not be distortion or underreporting of international news, but the sheer lack of demand for international content.

The balance of coverage of international news and events could be improved as follows:

  1. Feature an international news segment;
  2. Feature a lead international story in each broadcast at the national and local level;
  3. The US FCC could regulate that approximately 20% of news coverage should focus on events outside the country;
  4. The gatekeepers in media outlets could voluntarily agree to increase the amount of international news and documentary coverage;
  5. Newspapers and Internet/Web sources of news could increase their coverage of international stories without cooperation from the traditional print media, and
  6. Refer to terrorists and terrorist organizations as anonymous perpetrators of the events to avoid developing terror brands and feeding the terrorists' psychopathic egos.

Reference

Janda, K., Berry, J.M., & Goldman, J. (1995). The challenge of democracy: Government in America (4th ed.). Boston, MA: Houghton Mifflin.

Tuesday, December 12, 2006

Harvard Business School: What MBAs Don’t Know about Problems in Selling

While most people are good at selling themselves in friendships and at earning good grades in college, when it comes time to sell products and services, we forget the basics of exercising this important skill (McCormack, 1984). Every day we sell our families on our roles as Dad or Mom, our employers on our work skills, or our volunteer organizations on our work. However, when it comes to employing these natural powers of negotiation and persuasion in a purposeful way, we freeze up. Why? Our perceptions of what it means to sell are out of balance.

Selling Is An Important Skill

Some managers may think of selling as crass or unimportant, but it is an important skill that every senior executive has mastered. Management training that can be found in business schools is helpful but it is not a replacement for quota-carrying sales experience. Leadership and managerial training are important but no replacement for mastering the art of persuasion. Let’s face it; business schools can’t and don’t teach selling skills, but these skills can be essential for climbing the corporate ladder from middle management to the executive suite (McCormack, 1984). Listen up, newly minted MBAs…

Selling Can Be Seen as Intrusive

There is a social stigma attached to professional selling that is undeserved. In general, there is a temptation to go along with others, especially our peers or those we admire. Those who are selling and those who are being sold to can perceive selling as intrusive. An awareness of the intrusiveness of selling can be an asset to the MBA in the right context (McCormack, 1984). Instead of considering your sales efforts intrusive and resisting practicing them, use your understanding of sales-related intrusiveness to reach buyers without turning them off to your product. Do not use intrusive sales techniques, but be aware that any sales technique might be perceived as intrusive in the wrong context. Sensitivity to the buyer’s situation, coupled with patience, can provide effective support to the sales process.

Overcoming Your Fear of Selling

Fear of rejection and fear of failure can stop your sales efforts cold. The MBA must know deep down and on the surface that rejection is not a problem but an opportunity. Learn to love the word NO in all its glory! Salesmanship starts when the customer says NO. It is at that point when we truly discover the scope of the customer’s knowledge about the product and the resistance to buying the product. Moreover, the fear of failure is really a cleverly disguised fear of being a pioneer; if you are not making mistakes and failing, you are not trying hard enough to succeed. Overcoming the twin fears of rejection and failure is the hallmark of sales winners. In all my years of business, I have found only one difference between winners and losers (in all areas) over the long haul: winners thought they were winners and losers thought they were losers, with each becoming what they had allowed their fears to help them envision.

Reference

McCormack, M.H. (1984). What they don’t teach you at Harvard Business School: Notes from a street-smart executive. New York: Bantam Books.

Monday, December 11, 2006

Business Research Push-ups

Ascribing Causes to Events

The post hoc fallacy describes a major problem with inductive conclusions: concluding that covariation between two variables indeed exists when the variables cannot be manipulated (to verify the conclusion). Causal inferences are also only predictive and presumptive, not absolute. Therefore, conclusions based on causal inferences may only be temporary in nature -- changes in the conclusion will come if stronger predictions from other causal inferences are found. In other words, large problems may result when other variables (not under study) are responsible for effects we have ascribed to one particular variable.
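The lurking-variable problem can be demonstrated with simulated data. In this sketch (all data are invented), an unobserved variable z drives both x and y, so x and y covary strongly even though neither causes the other — exactly the trap the paragraph above warns about:

```python
import random

# Simulated lurking-variable demonstration: z is an unobserved common cause.
random.seed(0)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

z = [random.gauss(0, 1) for _ in range(500)]    # unobserved common cause
x = [zi + random.gauss(0, 0.5) for zi in z]     # the "cause" we think we observed
y = [zi + random.gauss(0, 0.5) for zi in z]     # the "effect" we think we observed

r = pearson(x, y)
print(f"r(x, y) = {r:.2f}")  # strongly positive, yet x does not cause y
```

Because x cannot be manipulated here, nothing in the data alone distinguishes "x causes y" from "z causes both" — which is why the causal inference remains presumptive.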

Major Sources of Measurement Errors

The four major sources of measurement error are the respondent, various situational factors, undue influence by the measurer, and the instrument used to measure the response. The respondent may be a source of measurement error in a home telephone interview conducted in the evenings on the topic of political candidates when s/he gives imprecise answers just to end the interview and get off the telephone. An additional person such as a spouse present during an interview on the topic of attitudes toward a new car model may influence the responses given. An interviewer in a face-to-face interview may inadvertently lead the respondent to give certain answers by using a particular tone of voice. A long survey containing many questions worded at a higher level of vocabulary than the respondent is accustomed to will cut response rates.

Measurement Scale Validity More Important than Reliability

Scale validity is more important to the measurement process than reliability. Reliability is concerned with freedom from the random error or instability that can be present in a measurement device. A measurement device can be reliable without being valid. Validity is more critical than reliability because it refers to whether what we wish to measure is actually measured. However, a measurement device cannot be valid if it is not reliable.

Difficulty of Determining Content Validity

Content validity of the measurement scale items is not the most difficult type of validity to determine, although its evaluation requires judgment and intuition, which may be difficult for some to exercise. Criterion-related validity is not simple either, but it can be determined by correlating scores with the criterion. Construct validity is the most difficult to determine because one must consider whether the construct supports or refutes the theory in question.

Reliable Measures May Not Be Valid

A valid measurement is reliable, but a reliable measurement may not be valid. Once again, a measurement instrument can be reliable without being valid. For example, a person may wish to measure a room. Believing one’s foot to be twelve inches in length, one steps across the room placing feet end to end. The process is repeated twenty times. The conclusion might therefore be that the length of the room is 240 inches. Later it is learned that the foot is only 11.5 inches in length. The person’s foot is a reliable measure of the length of the room -- they get twenty foot-lengths every time -- but it is not a valid measurement of the room in standard inches.
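The foot-measurement example above reduces to a few lines of arithmetic. This sketch simply restates the numbers from the paragraph: the repeated pace count is perfectly consistent (reliable), but the wrong assumed foot length makes every converted measurement systematically biased (not valid):

```python
# Reliable-but-not-valid sketch, using the figures from the room example.
ASSUMED_FOOT_IN = 12.0   # what the measurer believes a foot-length to be
ACTUAL_FOOT_IN = 11.5    # the true length of the measurer's foot
PACES = 20               # the same count is obtained on every repetition

# Five repeat trials all convert the same pace count with the same assumption.
measurements = [PACES * ASSUMED_FOOT_IN for _ in range(5)]
true_length = PACES * ACTUAL_FOOT_IN

print(measurements)                    # identical every trial: reliable
print(true_length)                     # 230.0 inches is the real length
print(measurements[0] - true_length)   # constant 10-inch bias: not valid
```

Random error would show up as trial-to-trial scatter in `measurements`; here there is none, which is exactly why reliability alone cannot rescue a biased instrument.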

Instrument Stability and Equivalence

Stability and equivalence are not identical terms. Both are factors of reliability, but they refer to different ways in which reliability can be reduced. Stability refers to the consistency of results when the same instrument is applied to the same subjects over time, while equivalence refers to consistency across different observers or alternative forms of the instrument.

Rating and Ranking Scales

Rating scales can be used easily to judge properties of objects against specified criteria without comparison to other objects, whereas ranking scales classify objects by requiring a choice between the objects. Rating scales can be time-consuming to construct, whereas the procedure for ranking objects can be difficult to administer. Rating objects can often be influenced by poor or careless judgments by the person doing the rating--the halo effect, leniency, and central tendency noted by Cooper and Schindler (2003) can all be found in the normal personnel review process, for example.

Ranking objects against one another may eliminate the need to develop an absolute set of criteria to judge by, as rating scales require, but judging more than two objects at a time may lead to misinterpretation of the exact level of attitude expressed for any one object. This is especially true when some of the items are equally matched in the positive attitude they evoke from the respondent. A simple example of this vote-splitting problem was seen in the 1992 U.S. Presidential election, where Bill Clinton, George Bush, and Ross Perot received roughly 43%, 37%, and 19% of the vote, respectively. One interpretation is that Clinton was the most favored candidate (i.e., had a mandate), whereas another interpretation is that Bush and Perot were similar enough in ideals that they were campaigning for the same votes, and therefore, had one of them not run, the remaining one would have been elected handily.

Likert and Differential Scales

Likert scales (i.e., summated scales) can typically be created more easily than differential scales such as the Thurstone differential scale. This advantage may stem from the Likert scale being constructed through item analysis, whereas the differential scale is created through consensus. Differential scales are complicated and expensive, so other methods like the Likert scale are preferable for business research. Still, for some types of research, the expense of having many knowledgeable judges agree on the rating of items included on the differential scale may be justified because it can produce better results than the pre-testing process of a summated scale.

Unidimensional and Multidimensional Scales

Unidimensional scales cumulatively measure attitudes from more extreme to less extreme, so it is possible to understand which individual items the respondent judged positively or negatively. However, not all concepts and constructs can be adequately assessed in this way because the items being studied may be correlated in more than one way; that is, they are multidimensional. Construction of a scale using the Semantic Differential method may indeed reveal more dimensions, so the measurement instrument must be narrowly focused to measure a concept unidimensionally. Somewhat ethereal concepts such as organizational image or brand image may be difficult to assess with a cumulative (i.e., unidimensional) scale.

Methods of Survey Measurement Scale Construction

The five methods of scale construction are the arbitrary approach, the consensus approach, item analysis, the cumulative approach, and factor scales (Cooper & Schindler, 2003). The arbitrary approach is a commonly used method in which scale construction occurs as the measurement instrument is being developed. Responses are scored based on the subjective judgment of the researcher, for better or for worse. This quick and inexpensive method can be very powerful in the hands of an experienced researcher.

The consensus approach involves scale construction by a panel of judges (i.e., presumably knowledgeable) who weigh each item for relevance, clarity, and level of attitude it expresses. The panel of judges can produce a better scale for the measurement instrument, but it does so at the expense of time and money.

The item analysis approach to scale construction analyzes how well the items included in the measurement instrument discriminate between indicants of interest. The values assigned can then be totaled to measure the respondent’s total score. The Likert scale is one common and very effective item analysis (i.e., summated) scale.
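
A minimal sketch of summated scoring follows; the five-item, five-point responses are invented for illustration, and a fuller item analysis would, as described above, check how well each item discriminates between high and low scorers.

```python
# Invented responses: three respondents rate five statements from
# 1 (strongly disagree) to 5 (strongly agree).
responses = [
    [5, 4, 5, 2, 4],
    [2, 1, 2, 3, 1],
    [4, 4, 3, 2, 5],
]

def total_score(items):
    """Summated (Likert) score: simply add the item ratings."""
    return sum(items)

scores = [total_score(r) for r in responses]
print(scores)  # [20, 9, 18]

# Crude discrimination check: an item discriminates if high scorers
# rate it higher than low scorers do. Item 4 moves the wrong way, so
# it would be a candidate for removal during pre-testing.
high, low = responses[0], responses[1]  # best and worst scorer
spread = [h - l for h, l in zip(high, low)]
print(spread)  # [3, 3, 3, -1, 3]
```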

The cumulative approach to scale construction ranks items based on the degree to which they represent a certain position held about the item of measurement. In particular, the Guttman scalogram attempts to measure unidimensionality, that is, if the responses fall into a pattern of the most extreme position also including endorsements of all positions that are less extreme.
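
The cumulative pattern just described can be checked mechanically. The sketch below uses invented endorse/reject data and computes a simple coefficient of reproducibility; by convention, values of roughly 0.90 or higher are taken to indicate a scalable set of items.

```python
# Guttman scalogram sketch: items are ordered from least to most
# extreme; responses are 1 (endorse) / 0 (reject). Data are invented.
responses = [
    [1, 1, 1, 0],   # perfect cumulative pattern
    [1, 1, 0, 0],   # perfect
    [1, 0, 1, 0],   # one deviation: endorses item 3 but not item 2
]

def reproducibility(responses):
    """Coefficient of reproducibility = 1 - errors / total responses.
    Errors are mismatches against the ideal pattern implied by each
    respondent's total score (endorsing only the s mildest items)."""
    errors, total = 0, 0
    for r in responses:
        s = sum(r)
        ideal = [1] * s + [0] * (len(r) - s)
        errors += sum(a != b for a, b in zip(r, ideal))
        total += len(r)
    return 1 - errors / total

print(round(reproducibility(responses), 3))  # 0.833
```

Here the third respondent breaks the cumulative pattern, pulling reproducibility below the conventional 0.90 threshold.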

The factor scale is based on the correlation between items and the common factors that they share. Factor scales deal with the problem of there being more than one dimension to an attitude toward an item and the fact that there may be more dimensions not yet known. The appropriateness of each of the five methods of scale construction depends on the research objective, the type of measurement instrument, and similar considerations. Scale construction through the arbitrary and item analysis methods may be less expensive and completely adequate for some topics. Scale construction through the consensus, cumulative, and factor scaling methods is more time-consuming and expensive, and would be more appropriate for measurements involving complex judgments.

The impact of the chosen scale construction technique on the scaling design could be as important as the information the research hopes to derive. The reason is that scale construction may be too costly or time-consuming, or the construction method may not fit the measurement of the judgments being made by the respondent. Therefore, the selection of the scale construction technique is important.

Probability Sampling and Nonprobability Sampling

A probability sample is necessary when a true cross section of the population is needed to properly achieve research objectives. The sampling phase of a project requiring a probability sample will most likely need dedicated funding because of the expense of identifying the population members from which to draw, the personnel involved, and the larger sample size needed to produce the desired degree of confidence.

A nonprobability sample is sufficient when it is not necessary to know precisely how each item is represented in the whole population. Clearly, if the researcher is seeking only to gain a “feel” for the level of presence of certain items within a population, a nonprobability sampling technique such as the judgment or quota sampling method will be less expensive than probability sampling and will probably suffice. A good example of this would be sampling conducted for exploratory research.

Random, Cluster, and Stratified Samples

A simple random sample is most appropriate when a list of the population elements is known and can be easily randomized. The simplicity with which the sampling procedure can be established and executed is a major advantage. A cluster sample is most appropriate when the expense of a simple random sample exceeds the budget and clusters that are internally heterogeneous and externally homogeneous can be identified. In other words, it may be easiest to obtain a list of population elements that are naturally grouped into heterogeneous clusters that can be sampled. A stratified sample is appropriate if a complete population list that would facilitate a simple random sample is unavailable, and preferable if the population can be stratified on the primary variable being studied. A stratified sample can also improve statistical efficiency if it results in the elements within each stratum being more alike one another (i.e., homogeneous) and different from the elements of the other strata (i.e., heterogeneous).
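
The stratified design can be sketched with proportionate allocation, where each stratum contributes in proportion to its share of the population. The strata and sizes below are invented for illustration.

```python
import random

# Invented population split into two strata of very different sizes.
random.seed(0)
population = {"managers": list(range(100)),      # 100 elements
              "staff": list(range(100, 1000))}   # 900 elements

def stratified_sample(strata, n):
    """Draw from each stratum in proportion to its population share."""
    total = sum(len(v) for v in strata.values())
    sample = []
    for members in strata.values():
        k = round(n * len(members) / total)
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(population, 100)
print(len(sample))  # 100: 10 managers + 90 staff
```

Because every stratum is sampled, the small managers group is guaranteed ten representatives, which a simple random draw of 100 from all 1,000 elements would not ensure.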

Finite Population Adjustment Factor

The finite population adjustment factor applies when the sample size is five percent or more of the total population and it can be used to reduce the required size of a sample to produce a desired level of precision (i.e., confidence). If the size of the sample is a budget concern, then adjusting the size of the sample with the finite population adjustment factor may be appropriate.
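
As a sketch of the adjustment, one common form of the finite population correction is n = n0 / (1 + (n0 - 1) / N); the figures below are invented for illustration.

```python
def adjusted_sample_size(n0, N):
    """Finite population adjustment: shrink the required sample size n0
    (computed as if the population were infinite) when sampling from a
    finite population of N elements. One common form of the correction."""
    return n0 / (1 + (n0 - 1) / N)

# With a required n0 of 400 and a population of only 2,000 (the sample
# would be 20% of the population), the requirement drops to about 333.
print(round(adjusted_sample_size(400, 2000)))  # 333
```
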

Disproportionate Stratified Probability Sample

The statistical efficiency of the entire sample can sometimes be increased if a larger sample is taken within one of the strata. This may be a good idea if the stratum is larger, more variable internally, and if the whole process of sampling is less expensive within the stratum.
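
This trade-off is captured by the classical optimum allocation rule, which assigns each stratum a share proportional to N_h × S_h / √c_h (stratum size times internal variability, discounted by the square root of per-element cost). The strata below are invented for illustration.

```python
import math

def disproportionate_allocation(n, strata):
    """Allocate a total sample of n across strata in proportion to
    N_h * S_h / sqrt(c_h): the classical optimum allocation rule."""
    weights = {h: N * S / math.sqrt(c) for h, (N, S, c) in strata.items()}
    total = sum(weights.values())
    return {h: round(n * w / total) for h, w in weights.items()}

# Invented strata: (size N_h, std. dev. S_h, cost per element c_h).
strata = {"A": (1000, 5.0, 1.0),   # large, variable, cheap -> big share
          "B": (500, 2.0, 4.0)}    # small, uniform, costly -> small share
print(disproportionate_allocation(100, strata))  # {'A': 91, 'B': 9}
```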


Reference

Cooper, D.R., & Schindler, P.S. (2003). Business research methods, (8th ed.). Boston, MA: McGraw Hill.

Sunday, December 10, 2006

Jeremy Bentham: Utilitarianism and General Community Interests

Bentham summarized Smith’s ideas of self-interest based on utility as the pleasure-pain principle (Ekelund and Hébert, 1997). This principle proposes that an individual tries to acquire benefits and avoid costs based on two primary human motivational factors: pain and pleasure. This principle served as the basis of Jeremy Bentham’s theory of utility.

Bentham disagreed with Smith’s idea that there could be an actual harmony of human interests. For example, crime maximizes the self-interest of the unapprehended criminal over the general interest; therefore, the best way to maximize utility for everyone is to implement the principle, “the interest of each individual must be identified with the general interest.” In other words, what is good for everyone could be shown to be best for each individual, generally speaking, and this is what Bentham thought should happen rather than each individual maximizing his or her own interests.

Bentham’s ideas of utility have application when considering the enactment of legislation and developing public policy, at least on the surface. Each individual interest is weighted equally, in that the subtraction of one person’s benefit means the addition of another’s interest. Both the rich man and the poor man have an equal weight applied to their interests. Furthermore, the sum of all individual interests comprises the general interest of the community under utilitarianism. Government action that increases benefits to the entire community at the expense of one portion is justified by Bentham’s model because the utility of all has been increased while the interests of only a few individuals have been diminished.

The specific mechanics for increasing the general interest in the area of economic welfare were embodied in the Felicific Calculus. Here we find many difficulties with summing individual interests to arrive at a measure of total general interest: (1) pain and pleasure among individuals were considered to be identical; (2) pleasures of the mind were treated as equivalent to those of the body; (3) money was treated as a common denominator of tradeoffs between pain and pleasure; (4) the fallacy of composition in grouping interests may result in what seems like a valid aggregation but is actually not in accordance with the will of the majority of individuals, thereby actually lowering the general interest.

Although useful in discussing the objective of maximizing the utility of all involved, Utilitarianism suffers from too narrow a view of the range of human behavior and the corresponding impact on utility, especially utility of the general populace. Furthermore, the summation of individual interests into collective interests was highly prone to error because of the inaccuracies involved in engaging in interpersonal utility comparisons.

Reference

Ekelund, R. B., Jr., & Hébert, R. F. (1997). A history of economic theory and method (4th ed.). New York: McGraw Hill.

Saturday, December 9, 2006

Employee Recruiting Methods: Advantages and Disadvantages

Various recruiting methods, such as help-wanted advertising, employment agencies, and employee referrals, have inherent differences worth considering for hiring efficiently and effectively. In reality, there is a wide array of options open to human resources (HR) managers charged with external staffing responsibilities. As part of developing a recruitment strategy, the HR manager must determine the qualifications required of applicants, select recruitment sources and communication channels, determine how candidates will be induced to join the organization, craft the message to be relayed to the media or agency for publication and, finally, select and brief the recruiters who will screen applicants. According to Milkovich and Boudreau (1996, p. 28), “There is very little evidence of the effectiveness of different recruitment methods for enhancing job performance and much of the evidence is mixed.” In other words, the relationship between how the candidate is identified by the organization and how the candidate ultimately performs is not well understood. Regardless, distinct recruitment channels have advantages and disadvantages for recruiting various types of workers.

The channels available for communicating with applicants, with the advantages and disadvantages of each described, are:

  • Walk-in / Email-in contacts – An inexpensive but less useful method for professional, technical, manager, and supervisor positions;
  • Referrals from employees, vendors or customers – The recruit is less likely to leave in the first year, but recruits from referrals tend to reflect the age, race, and gender characteristics of the current employee population, instead of diversity targets;
  • Internal staffing – This is a good source of applicants, especially for managerial positions, but internal staffing requires the overhead of internal recruitment, selection, and separation processes; still, it is possible that no internal candidate has the appropriate skill set to fill the position;
  • College recruiting – An active college recruiting program is a way to hire fresh talent and promote the long-term viability of the organization, but selecting candidates and cultivating and maintaining relationships with colleges and universities is expensive;
  • High schools / Vocational schools recruiting – The sheer number and close proximity of these schools make them less expensive, but there are often basic skill deficiencies among such applicants;
  • Public employment agencies – Agencies that serve the public are widely available, but they are best suited to sourcing clerical workers, unskilled laborers, and production workers and technicians;
  • Private employment agencies / Search firms – Search firms and headhunters are very effective in that they can respond to targeted needs for executives, but the service is very expensive;
  • Professional associations – These can be important services for locating and networking with professionals and employers, often available even through the Web, but with irregular meetings they are sometimes not readily accessible;
  • Web portals – There is the potential for very efficient information exchange for employers and employees in the recruitment or search process, but minorities may be underrepresented due to unequal access to Internet services;
  • Newspaper advertising – The most effective source of candidates for all job classifications except managers and supervisors, but studies show that advertising often produces low-performing employees and high rates of separation;
  • Immigrant recruitment – An important source of scientific and professional talent can be found by bringing employees in from foreign countries, but sponsors are often required and legal restrictions exist;
  • Outsourcing or Off-shoring agencies – Significant professional, managerial, and technical talent can be secured by working with agencies specializing in outsourcing the business process to a foreign country or establishing independent business units in the target country.


Reference

Milkovich, G.T., & Boudreau, J.W. (1996). Human resources management (8th ed.), New York: Irwin.

Friday, December 8, 2006

Employee Variables that Help Explain Differences in Behavior and Performance

Individual variables are those attributes that are intrinsic to the individual and serve as a catalyst for the subsequent behaviors exhibited and performance level achieved. The link between employee attributes and their overall performance, especially with respect to co-workers and customers, is well known. A few of the individual differences that can influence behaviors include abilities and skills, family background, personality, perception, attitudes, attribution, learning capacity, age, race, sex, and experience. Because of the sheer volume of material that could be presented for all of these variables, we can briefly mention psychological variables such as perception, attitudes, and personality as being most important (Gibson, Ivancevich and Donnelly, 1994).

It is important for managers to understand how the complex set of attributes possessed by individual employees can be affected by their managerial style. Employee behavior is generally well reasoned within the employee's own sphere of perception. Some individual attributes such as age, race, and gender cannot be altered -- therefore, an understanding of individual differences that can be altered or affected (e.g., perception, attitudes, and personality) and their role in job effectiveness is a crucial part of a manager's job. Managers can help adjust the employee's environment to best emphasize their individual attributes. Moreover, although it seems unlikely that an individual's behavior can be substantially altered, many other psychological variables associated with the job can be adjusted to fit the individual and thereby encourage peak performance. In sum, it is the task of managers to ascertain how performance and behavior vary with psychological differences among employees.

Reference

Gibson, J.L., Ivancevich, J.M., & Donnelly, J.H., Jr. (1994). Organizations: Behavior, structure, processes (8th ed.). Boston, MA: Irwin.

Thursday, December 7, 2006

Conflict-of-Law Rules Outside Business Contracts

Disclaimer: The following case background and solution are meant for educational purposes only. I am not a lawyer and this is not legal advice.

Beattey v. College Centre of Finger Lakes, District Court of Appeal of Florida, Fourth District, 613 So. 2d 52 (1992).


Case Background

“Richard Beattey, Jr. was driving in the Bahamas when he collided head-on with a truck owned by College Centre of Finger lakes and driven by its employee, Zeakes. College Centre was a New York corporation with an office in the Bahamas. Two passengers in Beattey’s vehicle, both Indiana residents, died at the scene. Beattey was flown to Fort Lauderdale, Florida, but died en route from his injuries. Beattey’s parents, Indiana residents at the time of the accident and representatives of their son’s estate, filed this action for wrongful death in a Florida state court against College Centre. College Centre conceded that it was liable for the negligence of its driver, Zeakes. If Bahamian law applied, the Fatal Accidents Act of the Bahamas would control this case. The act limits recovery in wrongful death action to funeral expenses. Under the law of any U.S. state, plaintiffs could sue for much more. The trial court applied a ‘significant relations’ conflict-of-law test and found that the Bahamas had the most significant interest in the case and that Bahamian law should thus be applied. The Beatteys appealed the decision, arguing that the court did not apply the conflict-of-law test appropriately” (Meiners, Ringleb, and Edwards, 2000, p. 54-55).

Conflict-of-Law

The majority of business contracts specify the state whose law will be applied to interpretation of the contract. The nature of the dispute and the absence of an explicit choice-of-law provision in this civil wrongful death action opened the case to interpretation by the court based on enacted statutes specifying conflict-of-law rules in that jurisdiction. The case demonstrates how the court's understanding of a state's significant interest affects which state's laws are applied.

The first issue would be for the plaintiffs’ legal representation to determine which court(s) have subject-matter jurisdiction. Once subject-matter jurisdiction is established and the plaintiffs (i.e., Richard Beattey’s parents) file a lawsuit, the court must decide whether it has territorial jurisdiction. In the case of Beattey v. College Centre of Finger Lakes, we can surmise that a Florida state court concluded that it would hear the case because Richard Beattey, Jr., the decedent, actually died in Florida.

The second issue to be resolved would be to determine the nature of the dispute. If the dispute had been over a contract, then the law of the state in which the contract was made would determine the interpretation. For example, an insurance contract would be interpreted under the law of the state in which the contract was completed (here, arguably Florida, where the decedent passed away). However, the Florida state court determined that this dispute involved a tort, which, under its application of the significant relations conflict-of-law test, would call for application of the substantive law of the territory in which the tort was committed (i.e., the Bahamas).

In the absence of a clear interpretation of the above, some courts apply the law from the jurisdiction that has the most significant interests at stake in a resolution of the dispute. Regardless, the courts attempt to arrive at a balanced position, “They [the courts] try to account for the interests of the parties in the fair resolution of the dispute, for the interest of the governments in the effective application of their laws and the policy rationales upon which they are based, and for the benefits that result from the ability of citizens to predict the legal consequences of their actions” (Meiners, Ringleb, and Edwards, 2000, p. 54). The District Court of Appeal of Florida, Fourth District on appeal chose to apply Restatement (Second) of Conflict of Laws in determining the state with the most significant of the competing interests.

With regard to the result under the traditional rule, the action of Zeakes, the driver of the truck, in colliding head on with Richard Beattey, Jr., et al was a tort that occurred in the Bahamas. Under the traditional application of the conflict-of-law rules, the result of this case would have been that the substantive law applied would be Bahamian law. In addition, under Bahamian law remedies for wrongful death would be limited to recovery of funeral expenses alone from the defendant.

West Germany, the then-current place of residence of Beattey’s parents, had the weakest link to the parties in this case. Indiana had no strong link because the decedent and his parents were no longer residents and the other passengers were not a part of the lawsuit. Florida had a relationship with the case because the loss of life, the possible consummation of any life insurance policy, and the police investigation occurred within its jurisdiction. The Bahamas had an interest in the resolution of the case because the tort and resulting property damage actually occurred within its jurisdiction. However, New York had the most significant relationship because the College Centre corporation, the corporation’s underwriter, and the decedent were all residents. Furthermore, all three parties could reasonably expect to be protected by both U.S. and New York State law.

Of all the jurisdictions New York had the most significant relationships with the parties to the dispute: (1) The decedent was a resident of New York and therefore his estate was going to probate in New York; (2) The defendant was a corporation organized in the State of New York; (3) The insurance company that was indemnifying the defendant issued the policy in the State of New York based on New York actuarial data. What is fascinating about this case is that the Florida state court failed to see the significance of the relationships with the state of New York. One wonders what the outcome would have been if the Florida state court refused to hear the dispute or if Beattey’s parents would have filed the lawsuit in New York in the first place.

Reference

Meiners, R.E., Ringleb, A.H., & Edwards, F.L. (2000). The legal environment of business (7th Ed.). New York: West Legal Studies in Business.

Wednesday, December 6, 2006

Adam Smith: Father of Economics

The work of Adam Smith was unique in that he built an entire system of economic thought instead of solving a single problem (Ekelund and Hébert, 1997). The main features of Smith’s analysis were (1) a division of labor, (2) analysis of price and allocation, and (3) the nature of economic growth. Adam Smith’s work described in The Wealth of Nations helped define economics as a separate discipline of scientific inquiry.

Smith sought to resolve the issue of an individual’s responsibility to the state and the state’s role in dealing with the individual. Should or could the state plan what might be the optimal economic outcomes for individuals? Smith argued that the combined actions of the invisible hand, the workings of G-d, and Laissez Faire combine to produce outcomes more advantageous than those proposed by a central planning authority.

One of Adam Smith’s most enduring concepts was that of a natural division of labor. The division of labor is a starting point for economic growth. If the worker performs fewer tasks and concentrates on developing an increased skill level, time is saved and, eventually, economies of scale result. As each process is defined and refined in terms of a single function, this same function can be automated much more easily. The primary force behind the division of labor is closely linked to an exchange theory of value; Smith argued that human nature includes a natural propensity to exchange and that each person must have an excess of some good to trade, presumably labor. Smith saw market forces in society driven by the desire for liberty and the inexpediency of control. In another era, Smith could have been a cognitive psychologist, with his observations that humans are most interested in what is perceived to be nearest in space or time, least interested in what is at a perceptual distance, and driven by the desire to improve their condition.

Smith further explained, borrowing from Hume, how labor contributed to value and therefore was partly a measure of value. Smith discussed real and nominal prices and observed the difference between the two. His discussion of prices included a description of a complete market model of equilibrium forces. Smith understood that many products had component parts that also had differing values determined by production costs and market evaluation. On the subject of wages, he laid the groundwork for the wages-fund theory. Smith detailed a very useful set of principles that describe the inequities of wages and profits from employment. Smith described rents as prices determined and heavily influenced by the next best alternative use for the non-labor resource.

Adam Smith detailed a blueprint for economic growth that framed the discussions throughout the classical period from 1776 to 1873 (Ekelund and Hébert, 1997). His blueprint rested on the idea that the division of labor is a natural tendency of society unless disturbed by outside forces and that it leads to more capital being generated and accumulated, which in turn leads to refinement and expansion of the division of labor. Smith designed a system arising from his own ideas and the previously disjointed theoretical pieces of other economists. Smith’s brilliance as a builder of an economic system is why The Wealth of Nations is still widely read today.

Reference

Ekelund, R. B., Jr., & Hébert, R. F. (1997). A history of economic theory and method (4th ed.). New York: McGraw Hill.

Tuesday, December 5, 2006

Credibility and Effective Leadership

Gaining and maintaining leadership-related credibility is an essential element in the process of getting others to hear and understand how the leader’s influence will affect them. It is not an overstatement to suggest that credibility is the first cousin of believability. Credibility includes such attributes as knowledge, skill, trustworthiness, dynamism, and expertise. In many respects, leadership credibility is in the eyes of the beholders; that is, those stakeholders and constituencies who are willing and able to be led toward completing a task perceive the leader’s ability to lead in various ways.

The leader must earn leadership credibility by showing that they have both the background and the vision to lead the group toward the common goal. If credibility or believability cannot be perceived in the leader’s presence and actions, the message to constituents will not be received and followed. Being a foundational element of leadership, credibility must be guarded, because it is often hard to earn and easy to lose. If credibility wanes, it can be difficult to buttress. For example, if a leader carelessly utters a half-truth or provides a professional opinion that exceeds their training, credibility could be in jeopardy.

In most cases, the ability to function effectively as a leader substantially depends upon achieving high credibility with those who are to be led. Followers must be willing and able to believe the leader, especially when the leader is articulating a position about the future of the group. Forward-looking statements that do not seem true or that do not unfold to be true can dilute hard-earned credibility; there is a tenuous give and take between the leader being accurate on one hand and positive and visionary on the other.

Reference

Kouzes, J.M., & Posner, B.Z. (2002). The leadership challenge (3rd ed.). San Francisco, CA: Jossey-Bass.

Monday, December 4, 2006

Project Management Glossary

A Guide to the Often Confusing World of Project Management Software Terminology (Last Update: 12/4/06, by Dave Wagner)


Note: I first published this glossary in 1993 to the project management community on the Internet, when I was Vice President of Marketing for a software company in charge of technical publications. I was the editor for this glossary but other (unknown) individuals contributed to the document, including Marilyn Cantey, who has since retired. The project management glossary can now be found on dozens of websites and the company that bought the original product documents on which this glossary was based has since moved on. In sum, it seemed wise to take control of the document again, correct errors that have crept into the document over the years, and make it available to a new generation of MBAs and project management professionals.

Terms Defined (Click below)

Activity
Activity Duration
Actual Dates
Baseline Schedule
Calendars
Control
Critical Activity
Calculate Schedule
Critical Path
Duration
Early Finish
Early Start
Elapsed Time
Finish Float
Finishing Activity
Finish-to-Finish Lag
Finish-to-Start Lag
Float
Forced Analysis
Free Float
Gantt (Bar) Chart
Hammocks
Histogram
Lag
Late Finish
Late Start
Micro-Scheduling
Milestones
Multi-Project Analysis
Negative Float
Network Analysis
Network Diagram
Parallel Activities
Path
Positive Float
Precedence Notation
Predecessor
PERT
Project
Rescheduling
Resource
Resource Based Duration
Resource Leveling
Scheduling
Sequence
Slippage
Start Float
Start-to-Start Lag
Starting Activity
Sub-Critical Activity
Subproject
Successor
Super-Critical Activity
Target Finish -- Activity
Target Finish -- Project
Target Start -- Activity
Total Float
Work Breakdown Structure (WBS)
Work Flow
Work Load
Work Units
Zero Float


Activity

An activity is an individual task needed for the completion of a project; it is the smallest discrete block of time and resources typically handled by PM software. Multiple activities are related to each other by identifying their immediate predecessors. Solitary activities, which have no predecessors or successors, are allowed. Most PM software packages are precedence-based systems that analyze schedules based on the activity relationships that are specified. Activities can also be called work packages, tasks, or deliverables.

Activity Duration

Activity duration specifies the length of time (hours, days, weeks, months) that it takes to complete an activity. This information is optional in the data entry of an activity. Workflow (predecessor relationships) can be defined before durations are assigned. Activities with zero duration are considered milestones (milestone value of 1 to 94) or hammocks (milestone value of 95 to 99).

Actual Dates

Actual dates are entered as the project progresses. These are the dates that activities really started and finished as opposed to planned or projected dates.

Baseline Schedule

The baseline schedule is a fixed project schedule. It is the standard by which project performance is measured. The current schedule is copied into the baseline schedule that remains frozen until it is reset. Resetting the baseline is done when the scope of the project has been changed significantly. At that point, the original or current baseline becomes invalid and should not be compared with the current schedule.

Calendars

A project calendar lists time intervals in which activities or resources can or cannot be scheduled. A project usually has one default calendar for the normal workweek (Monday through Friday), but may have other calendars as well. Each calendar can be customized with its own holidays and extra workdays. Resources and activities can be attached to any of the calendars that are defined.

Control

Control is the process of comparing actual performance with planned performance, analyzing the differences, and taking the appropriate corrective action.

Critical Activity

A critical activity has zero or negative float. This activity has no allowance for work slippage. It must be finished on time or the whole project will fall behind schedule. (Non-critical activities have float or slack time and are not in the critical path. Super-critical activities have negative float.)

Calculate Schedule

The Critical Path Method (Calculate Schedule) is a modeling process that defines all the project's critical activities that must be completed on time. The Calc tool bar button on the Gantt and PERT windows (found in most GUI-based PM software) calculates the start and finish dates of activities in the project in two passes. The first pass calculates early start and finish dates from the earliest start date forward. The second pass calculates the late start and finish dates from the latest finish date backwards. The difference between the pairs of start and finish dates for each task is the float or slack time for the task (see FLOAT). Slack is the amount of time a task can be delayed without delaying the project completion date. A great advantage of this method is the fine-tuning that can be done to accelerate the project. Shorten various critical path activities, then check the schedule to see how it is affected by the changes. By experimenting in this manner, the optimal project schedule can be determined.
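The two-pass calculation can be sketched in a few lines of Python. The four-activity network below is a hypothetical example, and durations are in whole days from a project start of day 0; real PM software works from calendars and lag types as described elsewhere in this glossary.

```python
# Minimal two-pass critical path calculation over a hypothetical network.
# Each activity maps to (duration in days, list of immediate predecessors).
acts = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}
order = ["A", "B", "C", "D"]  # already in precedence (topological) order

# First pass (forward): early start/finish from the project start, day 0.
es, ef = {}, {}
for a in order:
    dur, preds = acts[a]
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

# Second pass (backward): late start/finish from the project finish.
finish = max(ef.values())
ls, lf = {}, {}
for a in reversed(order):
    succs = [s for s in order if a in acts[s][1]]
    lf[a] = min((ls[s] for s in succs), default=finish)
    ls[a] = lf[a] - acts[a][0]

# Float (slack) is the gap between the two schedules.
flt = {a: ls[a] - es[a] for a in order}
print(flt)  # activities with zero float form the critical path
```

Here A, C, and D come out with zero float (the critical path A-C-D), while B has two days of slack.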

Critical Path

There may be several paths within one project. The critical path is the path (sequence) of activities that represents the longest total time required to complete the project. A delay in any activity in the critical path causes a delay in the completion of the project. There may be more than one critical path depending on durations and workflow logic.

Duration

Duration is the length of time needed to complete an activity. The time length can be determined by user input or resource usage. Activities with no duration are called milestones, which act as markers (see MILESTONES). Estimating durations for future activities is very difficult. It is recommended that the largest duration possible be used to account for possible delays.

Early Finish

The Early Finish date is defined as the earliest calculated date on which an activity can end. It is the activity's Early Start, which depends on the finish of its predecessor activities, plus the activity's duration. (See EARLY START.) Most PM software calculates early dates with a forward pass from the beginning of the project to the end. This is done by selecting ANALYZE & PROCESS REPORTS from the Report pull-down menu.

Early Start

The Early Start date is defined as the earliest calculated date on which an activity can begin. It is dependent on when all predecessor activities finish. Most PM software calculates early dates with a forward pass from the beginning of the project to the end.

Elapsed Time

Elapsed time is the total number of calendar days (including non-work days such as weekends or holidays) needed to complete an activity. It gives a "real world view" of how long an activity is scheduled to take for completion.

Finish Float

Finish float is the amount of excess time an activity has at its finish before a successor activity must start. It is the difference between the current activity's finish date and the successor's start date, using either the early or the late schedule. (Early and late dates are not mixed.) This may be referred to as slack time. All floats are calculated when a project has its schedule computed.

Finishing Activity

A finishing activity is the last activity that must be completed before a project can be considered finished. This activity is not a predecessor to any other activity -- it has no successors. Many PM software packages allow for multiple finish activities.

Finish-To-Finish Lag

The finish-to-finish lag is the minimum amount of time that must pass between the finish of one activity and the finish of its successor(s). If the predecessor's finish is delayed, the successor activity may have to be slowed or halted to allow the specified period to pass. All lags are calculated when a project has its schedule computed. Finish-to-Finish lags are often used with Start-to-Start lags.

Finish-To-Start Lag

The finish-to-start lag is the minimum amount of time that must pass between the finish of one activity and the start of its successor(s). The default finish-to-start lag is zero. If the predecessor's finish is delayed, the successor activity's start will have to be delayed. All lags are calculated when a project has its schedule computed. In most cases, Finish-to-Start lags are not used with other lag types.

Float

Float is the amount of time that an activity can slip past its duration without delaying the rest of the project. The calculation depends on the float type. See START FLOAT, FINISH FLOAT, POSITIVE FLOAT, and NEGATIVE FLOAT. All float is calculated when a project has its schedule computed.

Forced Analysis

Most PM software can force schedule analysis where a project is re-analyzed even if no new data has been entered. The feature is used for an analysis on the project by itself after it has been analyzed with other projects in multi-project processing (or vice versa). A leveled schedule may also be removed by forcing schedule analysis.

Free Float

Free float is the excess time available before the start of the following activity, assuming that both activities start on their early start dates. Free float is calculated in the following way:

FREE FLOAT = EARLIEST START OF FOLLOWING ACTIVITY - EARLIEST START OF PRESENT ACTIVITY - DURATION OF PRESENT ACTIVITY

On the activity's calendar, free float is the length of time from the end of the activity to the earliest Early Start date from among all of its successors. If the activity has no successors, the project finish date is used. Since free float is meaningless for hammocks, it is set to zero. For the common case where all lags are finish-to-start lags of zero, the free float represents the number of workdays that an activity can be delayed before it affects any other activity in the project.

Example: The current activity has an Early Start of March 1st and a duration of 3 days. The succeeding activity has an Early Start of March 7th. Assuming every day is a workday:

FREE FLOAT = March 7 - March 1 - 3 days = 6 days - 3 days = 3 days

Free float can be thought of as the amount of time an activity can expand without affecting the following activity. If the current activity takes longer to complete than its projected duration and free float combined, the following activity will be unable to begin by its earliest start date.
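The worked example above translates directly into code. The year is hypothetical (the example gives only March dates), and every day is assumed to be a workday, as stated:

```python
from datetime import date

# Free float = earliest start of following activity
#              - earliest start of present activity
#              - duration of present activity
es_current = date(2006, 3, 1)   # Early Start of the current activity
duration = 3                    # days; every day assumed a workday
es_next = date(2006, 3, 7)      # Early Start of the succeeding activity

free_float = (es_next - es_current).days - duration
print(free_float)  # 3
```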

Gantt (Bar) Chart

A Gantt chart is a graphic display of activity durations. It is also referred to as a bar chart. Activities are listed with other tabular information on the left side with time intervals over the bars. Activity durations are shown in the form of horizontal bars.

Hammocks

A hammock groups activities, milestones, or other hammocks together for reporting. A hammock's milestone number ranges from 95 to 99. This allows for five levels of summation. For example, two hammocks at the 95 level can be combined in a 96 level hammock. Any number of hammocks is allowed within the five levels for a project. Most PM software calculates the duration of a hammock from the early and late dates of the activities to which it is linked.

Histogram

A histogram is a graphic display of resource usage over a period. It allows the detection of overused or underused resources. The resource usage is displayed in colored vertical bars.
The ideal level for a resource on the screen is indicated by another color (typically red). The vertical height is produced by the value specified in the maximum usage field of the Resource Label window. (The printed histogram uses a horizontal line to display the maximum usage set in the Resource Label window.) If the resource bar extends beyond the red area for any given day, resources need to be leveled (or spread out) for proper allocation. The resource histograms should be checked after resources are assigned to the project activities.

Lag

Lag is the time delay between the start or finish of an activity and the start or finish of its successor(s). See FINISH-TO-FINISH LAG, FINISH-TO-START LAG, and START-TO-START LAG.

Late Finish

Late Finish dates are defined as the latest dates by which an activity can finish to avoid causing delays in the project. Many PM software packages calculate late dates with a backward pass from the end of the project to the beginning. This is done by selecting ANALYZE & PROCESS REPORTS from the Report pull-down menu.

Late Start

Late Start dates are defined as the latest dates by which an activity can start to avoid causing delays in the project. Many PM software packages calculate late dates with a backward pass from the end of the project to the beginning.

Micro-Scheduling

Micro-scheduling is the scheduling of activities with duration less than one day (in hours or fractional days).

Milestones

A milestone is an activity with zero duration (usually marking the end of a period).

Multi-Project Analysis

Multi-project analysis is used to analyze the impact and interaction of activities and resources whose progress affects the progress of a group of projects or for projects with shared resources or both. Multi-project analysis can also be used for composite reporting on projects having no dependencies or resources in common.

Negative Float

Negative float indicates activities must start before their predecessors finish in order to meet a Target Finish date. All float is calculated when a project has its schedule computed. Negative float occurs when the difference between the late dates and the early dates (start or finish) of an activity is negative. In this situation, the late dates are earlier than the early dates. This can happen when constraints (Activity Target dates or a Project Target Finish date) are added to a project.

Network Analysis

Network analysis is the process of identifying early and late start and finish dates for project activities. This is done with a forward and backward pass through the project. Many PM software tools will check for loops in the network and issue an error message if one is found. The error message will identify the loop and all activities within it.
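Loop checking of this kind is a depth-first search for a cycle in the precedence network. A sketch, using the same predecessor-list representation as elsewhere on this page (the three-activity networks are hypothetical):

```python
# Detect a loop (cycle) in a precedence network, as many PM tools do before
# schedule analysis. `preds` maps each activity to its list of predecessors.
def find_loop(preds):
    """Return a list of activities forming a loop, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / done
    color = {a: WHITE for a in preds}
    stack = []

    def visit(a):
        color[a] = GRAY
        stack.append(a)
        for p in preds[a]:
            if color[p] == GRAY:               # back edge: loop found
                return stack[stack.index(p):]
            if color[p] == WHITE:
                loop = visit(p)
                if loop:
                    return loop
        color[a] = BLACK
        stack.pop()
        return None

    for a in preds:
        if color[a] == WHITE:
            loop = visit(a)
            if loop:
                return loop
    return None

print(find_loop({"A": [], "B": ["A"], "C": ["B"]}))     # None -- acyclic
print(find_loop({"A": ["C"], "B": ["A"], "C": ["B"]}))  # e.g. ['A', 'C', 'B']
```

As the text notes, reporting the activities inside the loop (not just its existence) is what makes the error message actionable.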

Network Diagram

A network diagram is a graphic representation of activity sequence and relationships. Activity boxes are connected together with one-way arrows to indicate precedence. The first activity is placed on the left side of the diagram with the last activity on the right side. Activity boxes are usually placed at different levels (not in a single row) to accommodate activities that are done simultaneously.

Parallel Activities

Parallel activities are two or more activities that can be done at the same time. This allows a project to be completed faster than if the activities were arranged serially in a straight line.

Path

A path is a series of connected activities. Refer to CRITICAL PATH METHOD for information on critical and non-critical paths.

Positive Float

Positive float is defined as the amount of time that an activity's start can be delayed without affecting the project completion date. An activity with positive float is not on the critical path and is called a non-critical activity. Most software packages calculate float time during schedule analysis. The difference between early and late dates (start or finish) determines the amount of float. Float time is shown at the end or the beginning of non-critical activities when a bar chart reflects both early and late schedules. Float is shown on many of the tabular reports.

Precedence Notation

Precedence notation is a means of describing project workflow. It is sometimes called activity-on-node notation. Each activity is assigned a unique identifier. Workflow direction is indicated by showing each of the activity's predecessors and their lag relationships. Graphically, precedence networks are represented by using descriptive boxes and connecting arrows to denote the flow of work.

Predecessor

An activity that must be completed (or be partially completed) before a specified activity can begin is called a predecessor. The combination of all predecessors and successors (see SUCCESSOR) relationships among the project activities forms a network. This network can be analyzed to determine the critical path and other project scheduling implications.

Program Evaluation and Review Technique (PERT)

PERT is a project management technique for determining how much time a project needs before it is completed. Each activity is assigned a best, worst, and most probable completion time estimate. These estimates are combined into a weighted average (expected) completion time. The expected times are used to figure the critical path and the standard deviation of completion times for the entire project.
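The conventional PERT weighting (which the entry above leaves implicit) counts the most probable estimate four times and divides by six, with the standard deviation taken as one sixth of the best-to-worst spread. A sketch, with a hypothetical activity:

```python
# Conventional PERT three-point estimate for one activity.
def pert_estimate(best, probable, worst):
    expected = (best + 4 * probable + worst) / 6  # weighted average
    std_dev = (worst - best) / 6                  # spread of the estimates
    return expected, std_dev

# Hypothetical activity: 2 days best case, 4 most probable, 12 worst case.
te, sd = pert_estimate(2, 4, 12)
print(te, sd)  # expected time 5.0, std dev ~1.67
```

Summing expected times along the critical path, and combining the variances of those activities, yields the project-level figures the entry describes.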

Project

A project is a one-time effort to accomplish an explicit objective by a specific time. Each project is unique although similar projects may exist. Like the individual activity, the project has a distinguishable start and finish and a time frame for completion. Each activity in the project will be monitored and controlled to determine its impact on other activities and projects. The project is the largest discrete block of time and resources handled by most PM software.

Rescheduling

Rescheduling is a feature of most PM software that recalculates the start and finish dates of all uncompleted activities based upon progress as of a specified date.

Resource

A resource is anything that is assigned to an activity or needed to complete an activity. This may include equipment, people, buildings, etc.

Resource Based Duration

Resource based duration provides the option to determine activity duration, remaining duration, and percent complete through resource usage. The resource requiring the greatest time to complete the specified amount of work on the activity will determine its duration. You may change the duration mode for an activity at any time. This feature may not be used without values in the Resource Usage fields.

Resource Leveling

Resource leveling provides the capability to adjust project schedules in order to minimize the peaks in daily resource usages. This is usually done when resources are over-allocated. Activities are moved within their available float to produce a new schedule. Resources and projects may have leveling priorities. Some activities may not have any rescheduling flexibility due to lack of float. Either resource-constrained or schedule-constrained leveling may be selected.
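A toy version of schedule-constrained leveling can show the mechanics: each activity is slid forward within its available float until its resource demand fits under a daily cap. All numbers are hypothetical, and a real leveler also honors priorities and lag relationships:

```python
CAP = 2  # maximum units of the resource available per day

# activity: (early start day, duration, available float, units used per day)
acts = {
    "A": (0, 2, 0, 1),
    "B": (0, 2, 2, 2),
    "C": (0, 2, 3, 1),
}

usage = {}      # day -> units already committed
schedule = {}   # activity -> leveled start day

def fits(start, dur, units):
    return all(usage.get(d, 0) + units <= CAP for d in range(start, start + dur))

for name, (es, dur, flt, units) in acts.items():
    # Slide within available float; if nothing fits, the last slot is
    # taken anyway (a real tool would extend the schedule or flag it).
    for start in range(es, es + flt + 1):
        if fits(start, dur, units):
            break
    schedule[name] = start
    for d in range(start, start + dur):
        usage[d] = usage.get(d, 0) + units

print(schedule)  # {'A': 0, 'B': 2, 'C': 0}
```

B, the heaviest consumer, is pushed two days into its float so that no day exceeds the cap, which is exactly the peak-smoothing the entry describes.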

Scheduling

Scheduling is the process of determining when project activities will take place depending on defined durations and precedent activities. Schedule constraints specify when an activity should start or end based on duration, predecessors, external predecessor relationships, resource availability, or target dates.

Sequence

Sequence is the order in which activities will occur with respect to one another. This establishes the priority and dependencies between activities. Successor and predecessor relationships are developed in a network format. This allows those involved in the project to visualize the workflow.

Slippage

Slippage is the amount of slack or float time used up by the current activity due to a delayed start. If an activity without float is delayed, the entire project will slip.

Start Float

Start float is the amount of excess time an activity has between its Early Start and Late Start dates.

Start-To-Start Lag

Start-to-start lag is the minimum amount of time that must pass between the start of one activity and the start of its successor(s).

Starting Activity

A starting activity has no predecessors. It does not have to wait for any other activity to start. Many PM software packages permit multiple start activities if needed.

Sub-Critical Activity

A sub-critical activity has a float threshold value assigned to it by the project manager. When the activity reaches its float threshold, it is identified as being critical. Since this type of criticality is artificial, it normally does not affect the project's end date.

Subproject

A subproject is a distinct group of activities that form their own project, which in turn is part of a larger project. Subprojects are summarized into a single activity to hide the detail.

Successor

A successor is an activity whose start or finish depends on the start or finish of a predecessor activity. Refer to PREDECESSOR for related information.

Super-Critical Activity

An activity that is behind schedule is considered super-critical. It has been delayed to a point where its float is calculated to be a negative value. The negative float represents the number of units the activity is behind schedule.

Target Finish -- Activity

Target Finish is the user's imposed finish date for an activity. A Target Finish date is used if there are pre-defined commitment dates. Most PM software will not schedule a Late Finish date later than the Target Finish date. Your favorite PM software may alert you to negative float that occurs when a Late Finish date is later than a Target Finish date. This is caused by the duration of predecessors that makes it impossible to meet the Target Finish date. The negative float can be eliminated by reducing the duration of predecessors or extending the Target Finish date.

Target Finish -- Project

A user's Target Finish date can be imposed on a project as a whole. A Target Finish date is used if there is a pre-defined completion date. Most PM software will not schedule any Late Finish date later than the Target Finish date. See TARGET FINISH ACTIVITY on how to deal with negative float.

Target Start -- Activity

Target Start is an imposed starting date on an activity. Most PM software will not schedule an Early Start date earlier than the Target Start date.

Total Float

Total float is the excess time available for an activity to be expanded or delayed without affecting the rest of the project -- assuming it begins at its earliest time. It is calculated using the following formula:

TOTAL FLOAT = LATEST FINISH - EARLIEST START - DURATION
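Plugging hypothetical numbers into the formula (an activity that can start on day 0, takes 3 days, and must finish by day 8 at the latest):

```python
# TOTAL FLOAT = LATEST FINISH - EARLIEST START - DURATION
earliest_start = 0   # day the activity can first begin
duration = 3         # days of work
latest_finish = 8    # latest day it can end without delaying the project

total_float = latest_finish - earliest_start - duration
print(total_float)  # 5
```

The activity can expand or slip by up to 5 days in total before the rest of the project is affected.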

Work Breakdown Structure (WBS)

The WBS is a tool for defining the hierarchical breakdown of responsibilities and work in a project. It is developed by identifying the highest level of work in the project. These major categories are broken down into smaller components. The subdivision continues until the lowest required level of detail is established. These end units of the WBS become the activities in a project. Once implemented, the WBS facilitates summary reporting at a variety of levels.
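The hierarchy and its summary reporting can be sketched as a nested structure whose leaves are the end-unit activities. The two-level website WBS and its hour estimates below are invented for illustration:

```python
# A hypothetical WBS as nested dictionaries: branches are work categories,
# leaves are end-unit activities with estimated hours.
wbs = {
    "1 Website": {
        "1.1 Design": {"1.1.1 Wireframes": 16, "1.1.2 Visual design": 24},
        "1.2 Build":  {"1.2.1 Templates": 40, "1.2.2 Content entry": 20},
    }
}

def rollup(node):
    """Total the hours of all leaf activities under a WBS node."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 100
```

Calling `rollup` on any branch gives the summary figure for that level, which is the multi-level reporting the entry refers to.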

Work Flow

Workflow is the relationship of the activities in a project from start to finish. Workflow takes into consideration all types of activity relationships.

Work Load

Workload is the amount of work units assigned to a resource over a period.

Work Units

A work unit is the unit of measurement for resources. For example, people as a resource can be measured by the number of hours they work.

Zero Float

Zero float is a condition where there is no excess time between activities. An activity with zero float is considered a critical activity. If the duration of any critical activity is increased (the activity slips), the project finish date will slip.