Author: EconomicsLive
Classical Linear Regression Model [CLRM/CNLRM]
CNLRM (Classical Normal Linear Regression Model)
Assumption 1: Linear regression model. The regression model is linear in the parameters, as in
\(Y_i = \beta_1 + \beta_2 X_i + u_i\)
Assumption 2: X values are fixed in repeated sampling. Values taken by the regressor X are considered fixed in repeated samples. More technically, X is assumed to be nonstochastic.
Assumption 3: Zero mean value of disturbance \(u_i\). Given the value of X, the mean, or expected, value of the random disturbance term \(u_i\) is zero. Technically, the conditional mean value of \(u_i\) is zero. Symbolically, we have
\(E(u_i \mid X_i) = 0\)
Assumption 4: Homoscedasticity or equal variance of \(u_i\). Given the value of X, the variance of \(u_i\) is the same for all observations. That is, the conditional variances of \(u_i\) are identical. Symbolically, we have
\(\operatorname{var}(u_i \mid X_i) = E[u_i - E(u_i \mid X_i)]^2\)
\(= E(u_i^2 \mid X_i)\) because of Assumption 3
\(= \sigma^2\)
where var stands for variance.
Assumption 5: No autocorrelation between the disturbances. Given any two X values, \(X_i\) and \(X_j\) (\(i \neq j\)), the correlation between any two \(u_i\) and \(u_j\) (\(i \neq j\)) is zero. Symbolically,
\(\operatorname{cov}(u_i, u_j \mid X_i, X_j) = E\{[u_i - E(u_i)] \mid X_i\}\{[u_j - E(u_j)] \mid X_j\}\)
\(= E(u_i \mid X_i)(u_j \mid X_j)\)
\(= 0\)
where i and j are two different observations and where cov means covariance.
Assumption 6: Zero covariance between \(u_i\) and \(X_i\), or \(E(u_i X_i) = 0\). Formally,
\(\operatorname{cov}(u_i, X_i) = E[u_i - E(u_i)][X_i - E(X_i)]\)
\(= E[u_i(X_i - E(X_i))]\) since \(E(u_i) = 0\)
\(= E(u_i X_i) - E(X_i)E(u_i)\) since \(E(X_i)\) is nonstochastic
\(= E(u_i X_i)\) since \(E(u_i) = 0\)
\(= 0\) by assumption
Assumption 7: The number of observations n must be greater than the number of parameters to be estimated. Alternatively, the number of observations n must be greater than the number of explanatory variables.
Assumption 8: Variability in X values. The X values in a given sample must not all be the same. Technically, \(\operatorname{var}(X)\) must be a finite positive number.
Assumption 9: The regression model is correctly specified. Alternatively, there is no specification bias or error in the model used in empirical analysis.
Assumption 10: There is no perfect multicollinearity. That is, there are no perfect linear relationships among the explanatory variables.
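The assumptions above can be illustrated with a small simulation: a minimal sketch (all numeric values are illustrative assumptions, not from the text) that generates data satisfying the CLRM assumptions and recovers the parameters by ordinary least squares.

```python
import numpy as np

# Sketch: simulate the two-variable model Y_i = beta1 + beta2*X_i + u_i
# under the assumptions above (fixed X, E(u|X) = 0, constant variance
# sigma^2, no autocorrelation) and recover the parameters by ordinary
# least squares. All numeric values are illustrative assumptions.
rng = np.random.default_rng(0)

beta1, beta2, sigma = 2.0, 0.5, 1.0
n = 10_000
X = np.linspace(0.0, 10.0, n)     # regressor fixed in repeated samples
u = rng.normal(0.0, sigma, n)     # zero-mean, homoscedastic disturbances
Y = beta1 + beta2 * X + u

# OLS via the normal equations: b = (Z'Z)^(-1) Z'Y with Z = [1, X]
Z = np.column_stack([np.ones(n), X])
b = np.linalg.solve(Z.T @ Z, Z.T @ Y)
print(b)  # estimates should be close to (2.0, 0.5)
```

With a large sample and the disturbances behaving as assumed, the OLS estimates land very near the true parameter values.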
Integration By Parts

\(\int xe^{x}\,dx\)

\(\int x^{2}e^{x}\,dx\)

\(\int \frac{x^{2}}{\sqrt{1-x}}\,dx\)

\(\int \log x\,dx\)

\(\int x^{2}\log x\,dx\)

\(\int \log x^{3}\,dx\)

\(\int x\log x^{2}\,dx\)

\(\int x^{3}\log x\,dx\)

\(\int x^{2}\left(\log x\right)^{2}dx\)

\(\int x^{3}e^{x}\,dx\)

\(\int x\sqrt{2x+1}\,dx\)

\(\int \left(x+3\right)\left(x+1\right)^{\frac{1}{2}}\,dx\)

\(\int \log x^{x}\,dx\)

\(\int \log\left(x+1\right)dx\)

\(\int x^{2}e^{x}\,dx\)

\(\int x^{5}e^{x^{2}}\,dx\)
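A worked illustration of the method, using the first exercise above: with \(u = x\) and \(dv = e^{x}\,dx\), the parts formula \(\int u\,dv = uv - \int v\,du\) gives

\[
\int x e^{x}\,dx = x e^{x} - \int e^{x}\,dx = (x - 1)e^{x} + C.
\]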
Integration by Substitution
\(e^{4x}\)
\(8x^{x^{2}+7}\)
\(4xe^{2x}\sqrt{x^{3}+3}\)
\(4x\sqrt{x^{2}+8}\)
\(xe^{x^{2}}\)
\(\frac{4x-6}{x^{2}-3x+7}\)
\(\frac{3x^{5}+1}{x^{6}+2x}\)
\(\frac{2ax+b}{\left(ax^{2}+bx+c\right)^{2}}\)
\(\frac{2x}{\sqrt{x^{2}+4}}\)
\(\frac{12x+8}{3x^{2}+4x+8}\)
\(\frac{\left(\log x\right)^{3}}{x}\)
\(\frac{x}{\left(x^{2}+a^{2}\right)^{n}}\)
\(\int \left(7e^{2x}+\frac{5}{2x+7}-2\sqrt{1-5x}\right)dx\)
\(\int \left(Ax+B\right)^{n}dx\)
\(\int x^{2}e^{x^{3}}\,dx\)
\(\int x^{3}\sqrt{x^{2}-3}\,dx\)
\(\int x^{4}e^{x^{5}}\,dx\)
\(\int 3x^{2}\left(x^{3}+5\right)^{4}dx\)
\(\int x^{n-1}\left(5+7x^{n}\right)dx\)
\(\int e^{x}\left(e^{x}+2\right)^{2}dx\)
\(\int x\sqrt{2x^{2}+3}\,dx\)
\(\int \frac{x^{n-1}}{\left(a+bx^{n}\right)^{n}}\,dx\)
\(\int \frac{x}{x^{2}+a^{2}}\,dx\)
\(\int \frac{x^{2}-x}{x^{3}-3x+2}\,dx\)
\(\int \frac{1}{x}e^{\log x}\,dx\)
\(\int \frac{1}{x}\left(\log x\right)^{2}e^{\left(\log x\right)^{3}}\,dx\)
\(\int \frac{dx}{x+\sqrt{x}}\)
\(\int \frac{\left(x+1\right)\left(x+\log x\right)^{2}}{x}\,dx\)
\(\int \frac{x}{\sqrt{2x^{2}+3}}\,dx\)
\(\int \frac{x^{2}}{\sqrt{1-x}}\,dx\)
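A worked illustration of the method: for \(\int x e^{x^{2}}\,dx\) (the fifth integrand above), substitute \(t = x^{2}\), so that \(dt = 2x\,dx\):

\[
\int x e^{x^{2}}\,dx = \frac{1}{2}\int e^{t}\,dt = \frac{1}{2}e^{t} + C = \frac{1}{2}e^{x^{2}} + C.
\]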
Integration
Integrate w.r.t. \(x\) [as Antiderivative Process]

\(4x^{3}-3x^{2}+2x+1\)

\(2x^{5}-3x^{2}+2\)

\(\left(2x-1\right)\left(x+2\right)\)

\(\left(1+x\right)\left(2-5x\right)\)

\(6e^{x}-\frac{3}{x^{2}}\)

\(9x^{2}-2e^{x}+\frac{1}{x}\)

\(\left(2x^{2}+1\right)^{2}\,4x\)

\(\sqrt{x}+\sqrt[3]{x}-x^{\frac{3}{5}}\)

\(\frac{\left(1+x\right)^{2}}{x^{2}}\)

\(\frac{x^{3}+5x-6}{x^{2}}\)

\(\frac{x+1}{x-1}\)

\(\frac{x^{2}+1}{x-1}\)

\(\frac{x^{3}}{x-1}\)

\(\frac{3x^{2}+x+3}{x^{4}}\)

\(\sqrt{x}-\frac{1}{\sqrt{x}}\)

\(\sqrt{x}+\frac{1}{\sqrt{x}}\)

\(\left(\sqrt{x}+\frac{1}{\sqrt{x}}\right)^{3}\)

\(\frac{4x^{2}+3x+1}{x+1}\)

\(\frac{ax^{3}+bx+c}{x^{3}}\)

\(\frac{\left(x^{2}-4\right)^{2}}{\sqrt{x}}\)
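A worked illustration: the integrand \(\frac{\left(1+x\right)^{2}}{x^{2}}\) from the list above can be expanded and then integrated term by term:

\[
\int \frac{(1+x)^{2}}{x^{2}}\,dx = \int \left(x^{-2} + \frac{2}{x} + 1\right)dx = -\frac{1}{x} + 2\log x + x + C.
\]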
Scitovsky Paradox
Scitovsky Reversals and the Double Criteria
Granted that the strong Kaldor criterion is lacking in its ability to compare allocations, problems also arise with the weak Kaldor criterion when comparing welfare under different types of change. The famous Scitovsky reversal paradox, first identified by Tibor Scitovsky, uncovered an important drawback of the weak Kaldor criterion. Suppose we are in a production economy and suddenly the production conditions change so that, as in the figure below, we move from PPF_{D} to PPF_{F}. In order to judge whether this technological change improved or worsened welfare, we should attempt to compare the corresponding Pareto-optimal points D and F, represented by the tangencies of CIC_{D} with PPF_{D} and CIC_{F} with PPF_{F}.
However, notice that CIC_{D} and CIC_{F} intersect each other. Specifically, recall that intersecting CICs imply Pareto-improvements: note that F is Pareto-superior to E, and E, of course, represents the same level of “aggregate” utility as D as it lies on CIC_{D}. Thus, from D, it is possible to hypothetically redistribute goods and outputs so that we obtain a Pareto-improvement. Thus, according to the weak Kaldor criterion, situation F is superior to D. However, by a reverse argument, moving from PPF_{F} to PPF_{D}, we can see that D is Pareto-superior to G, and G yields the same level of “aggregate” utility as F as it lies on CIC_{F}. Thus, by the weak Kaldor criterion again, situation D is ranked higher than situation F. There is therefore a “reversal” of rankings between D and F under the weak Kaldor criterion: F is better than D and D is better than F.
Scitovsky (1941) suggested that the resolution to this reversal paradox might be to combine the Hicks and Kaldor criteria. Notice that the movement from D to F fulfills the Kaldor criterion but not the Hicksian one since, from D, it is possible to undertake a hypothetical lump-sum redistribution within PPF_{D} that achieves a Pareto-improvement over F (e.g., a point slightly above G on PPF_{D} is a Pareto-improvement over G and thus over F). Thus, the Scitovsky double criterion states that an allocation is preferred to another if it fulfills both the Kaldor and Hicks criteria. This would, it seems, eliminate Scitovsky reversals such as that depicted in the figure above. Thus, when the two utility possibility curves are non-intersecting and a change involves movement from a position on a lower utility possibility curve to a position on a higher utility possibility curve, the change raises social welfare on the basis of the Kaldor-Hicks-Scitovsky criterion. This occurs only when a change brings about an increase in aggregate output or real income.
Arrow’s Impossibility Theorem
In an attempt to construct a consistent social ranking of a set of alternatives on the basis of individual preferences over this set, Arrow obtained:
1) an impossibility theorem;
2) a generalisation of the framework of welfare economics, covering all collective decisions from political democracy and committee decisions to market allocation; and
3) an axiomatic method which sets a standard of rigour for any future endeavour.
Prof. Arrow pointed out that the construction of a social welfare function which reflects the preferences of all individuals comprising the society is an impossible task. His main contention is that it is very difficult to set up a reasonable democratic procedure for aggregating individual preferences into a social preference for making a social choice. Arrow proved a general theorem according to which it is impossible to construct a social ordering that will in some way reflect the individual orderings of all the members of society.
While constructing his argument, Arrow maintained that an individual’s ordering of social states depends not only on the commodities she consumes but also on the amounts of various types of collective activity, such as municipal services, parks, sanitation, the erection of statues of famous men, etc. In other words, an individual cannot evaluate the welfare results of collective activity solely on the basis of her own consumption; instead, her ordering of social states will depend on her own consumption as well as on the consumption of others in society. An individual’s ordering of alternative social states reflects her value judgments, which Arrow calls simply ‘values’. According to him, it is the ordering of social states according to the values of individuals, as distinct from individual tastes, that should be determined for the construction of a valid social welfare function.
The theorem’s content, somewhat simplified, is as follows: A society needs to agree on a preference order among several different options. Each individual in the society has a particular personal preference order. The problem is to find a general mechanism, called a social choice function, which transforms the set of preference orders, one for each individual, into a global societal preference order. This social choice function should have several desirable (“fair”) properties:
 Unrestricted domain or universality: the social choice function should create a deterministic, complete societal preference order from every possible set of individual preference orders. (The vote must have a result that ranks all possible choices relative to one another, the voting mechanism must be able to process all possible sets of voter preferences, and it should always give the same result for the same votes, without random selection.)
 Non-imposition or citizen sovereignty: every possible societal preference order should be achievable by some set of individual preference orders. (Every result must be achievable somehow.)
 Non-dictatorship: the social choice function should not simply follow the preference order of a single individual while ignoring all others.
 Positive association of social and individual values or Monotonicity: if an individual modifies her preference order by promoting a certain option, then the societal preference order should respond only by promoting that same option or not changing, never by placing it lower than before. (An individual should not be able to hurt an option by ranking it higher.)
 Independence of irrelevant alternatives: if we restrict attention to a subset of options, and apply the social choice function only to those, then the result should be compatible with the outcome for the whole set of options. (Changes in individuals’ rankings of “irrelevant” alternatives [i.e., ones outside the subset] should have no impact on the societal ranking of the “relevant” subset.)
Arrow examined the problem rigorously by specifying a set of requirements that should be satisfied by an acceptable rule for constructing social preferences from individual preferences; these can be simplified into the following conditions of social choice:
 Social preferences should be complete in that given a choice between alternatives A and B, it should say whether A is preferred to B, or B is preferred to A or that there is a social indifference between A and B.
 Social preferences should be transitive: if A is preferred to B, and B is preferred to C, then A is also preferred to C.
 If every individual prefers A to B, then socially A should be preferred to B.
 Social preferences should not depend upon the preferences of one individual only, i.e., a dictator (not in the pejorative sense of the word).
 The last condition asserts that the social preference of A compared to B should be independent of preferences for other alternatives.
According to Arrow, “if we exclude the possibility of interpersonal comparisons of utility, then the only method of passing from individual tastes to social preferences which will be satisfactory and which will be defined for a wide range of sets of individual ordering are either imposed or dictatorial”.
The democratic procedure for reaching a social choice or group decision is the expression of their preferences by individuals through free voting. Social choice will be determined by the majority rule. But Arrow has demonstrated through his impossibility theorem mentioned above that consistent social choices cannot be made without violating the consistency or transitivity condition. The social choice on the basis of majority rule may be inconsistent even if individual preferences are consistent. Arrow first considers a simple case of two alternative social states and proves that in this case a group decision or social choice through majority rule yields a social choice that satisfies all five conditions. But when there are more than two alternatives, majority rule fails to yield a social choice without violating at least one of the five conditions. Thus, Arrow’s theorem says that if the decision-making body has at least two members and at least three options to decide among, then it is impossible to design a social choice function that satisfies all these conditions at once.
Various economists have tried to explain Arrow’s impossibility theorem in very different ways, but we will illustrate the proof of the theorem with the help of the table given below.
Figure: Ranking of Alternatives by Individuals and Social Choices
In this table, three individuals A, B and C, who constitute the society, vote over three alternative social states X, Y and Z by writing 3 against the most preferred alternative, 2 for the next preferred alternative and 1 for the least preferred alternative. As shown in the table, individual A prefers X to Y, Y to Z and therefore X to Z. Individual B prefers Y to Z, Z to X and therefore Y to X. Individual C prefers Z to X, X to Y and therefore Z to Y. It is clear that two individuals (A and C) prefer X to Y, two (A and B) prefer Y to Z, and two (B and C) prefer Z to X. Thus the majority prefers X to Y and Y to Z, and therefore, if it were consistent, should prefer X to Z. But the majority also prefers Z to X. Thus we see that majority rule leads to inconsistent social choices: on the one hand, transitivity requires X to be preferred to Z, while on the other hand, Z is preferred to X by the majority, which is contradictory.
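The cycle in the table can be checked mechanically. A minimal sketch in Python (voter labels and rankings are those from the table above):

```python
# Three voters A, B, C rank alternatives X, Y, Z; pairwise majority
# voting is intransitive even though each individual ranking is
# transitive. Each list is ordered from most to least preferred.
rankings = {
    "A": ["X", "Y", "Z"],   # A: X > Y > Z
    "B": ["Y", "Z", "X"],   # B: Y > Z > X
    "C": ["Z", "X", "Y"],   # C: Z > X > Y
}

def majority_prefers(a, b):
    """True if a majority of voters rank alternative a above b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes > len(rankings) / 2

print(majority_prefers("X", "Y"))  # True (A and C)
print(majority_prefers("Y", "Z"))  # True (A and B)
print(majority_prefers("Z", "X"))  # True (B and C): a cycle
```

All three pairwise majorities hold at once, which is exactly the intransitivity the text describes.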
On the basis of the five conditions mentioned above, Arrow derived three consequences to explain his impossibility theorem. Let us analyze these consequences in the case of three alternatives X, Y and Z available to two individuals, A and B. According to Consequence I, whenever the two individuals prefer X to Y, then irrespective of the rank of the third alternative Z, society will prefer X to Y. According to Consequence II, if in a given social choice the will of individual A prevails against the opposition of individual B, then the will of A will certainly prevail in case individual B is indifferent or agrees with A. According to Consequence III, if individuals A and B have exactly conflicting interests in the choice between two alternatives X and Y, then society will be indifferent between X and Y. It is interesting to note that a simple proof of the impossibility theorem follows from Consequence III. For instance, if individual A prefers X to Y and individual B prefers Y to X, and society opts for X, then A will be a dictator inasmuch as her choice will always be the social choice. Thus, Arrow’s theorem says that if the decision-making body has at least two members and at least three options to decide among, then it is impossible to design a social choice function that satisfies all these conditions at once. Arrow therefore concludes that it is impossible to derive a social ordering of different conceivable alternative social states on the basis of the individual orderings of those social states without violating at least one of the value judgments expressed in the five conditions of social choice. This is, in essence, his impossibility theorem.
Game Theory
Key Concepts
A game is any situation in which players (the participants) make strategic decisions—i.e., decisions that take into account each other’s actions and responses.
Payoffs are the value associated with a possible outcome.
A strategy is a rule or plan of action for playing the game.
An optimal strategy is the strategy that maximises a player’s expected payoff.
A cooperative game is a game in which participants can negotiate binding contracts that allow them to plan joint strategies.
A noncooperative game is a game in which negotiation and enforcement of binding contracts are not possible.
An example of a cooperative game is the bargaining between a buyer and a seller over the price of a rug. If the rug costs $100 to produce and the buyer values the rug at $200, a cooperative solution to the game is possible: An agreement to sell the rug at any price between $101 and $199 will maximise the sum of the buyer’s consumer surplus and the seller’s profit, while making both parties better off.
An example of a noncooperative game is a situation in which two competing firms take each other’s likely behaviour into account when independently setting their prices. Each firm knows that by undercutting its competitor, it can capture more market share. But it also knows that in doing so, it risks setting off a price war. Another noncooperative game is the auction mentioned above: Each bidder must take the likely behaviour of the other bidders into account when determining an optimal bidding strategy.
Dominant Strategies
Dominant strategy is the strategy that is optimal no matter what an opponent does.
The following example illustrates this in a duopoly setting. Suppose Firms A and B sell competing products and are deciding whether to undertake advertising campaigns. Each firm will be affected by its competitor’s decision. The possible outcomes of the game are illustrated by the payoff matrix in Table below. Observe that if both firms advertise, Firm A will earn a profit of 10 and Firm B a profit of 5. If Firm A advertises and Firm B does not, Firm A will earn 15 and Firm B zero. The table also shows the outcomes for the other two possibilities.
What strategy should each firm choose? First consider Firm A. It should clearly advertise because no matter what Firm B does, Firm A does best by advertising. If Firm B advertises, A earns a profit of 10 if it advertises but only 6 if it doesn’t. If B does not advertise, A earns 15 if it advertises but only 10 if it doesn’t. Thus advertising is a dominant strategy for Firm A. The same is true for Firm B: no matter what Firm A does, Firm B does best by advertising. Therefore, assuming that both firms are rational, we know that the outcome for this game is that both firms will advertise. This outcome is easy to determine because both firms have dominant strategies. When every player has a dominant strategy, we call the outcome of the game an equilibrium in dominant strategies.
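The search for a dominant strategy can be sketched mechanically. A minimal sketch of the advertising game, with payoff cells (Firm A's profit, Firm B's profit) taken from the numbers quoted in the text:

```python
# Rows index Firm A's strategy, columns Firm B's; each cell is
# (Firm A's profit, Firm B's profit), as described in the text.
A_strats = B_strats = ["Advertise", "Don't advertise"]
payoffs = {
    ("Advertise", "Advertise"): (10, 5),
    ("Advertise", "Don't advertise"): (15, 0),
    ("Don't advertise", "Advertise"): (6, 8),
    ("Don't advertise", "Don't advertise"): (10, 2),
}

def pay(own, opp, player):
    """Payoff to `player` (0 = Firm A, 1 = Firm B) from a strategy pair."""
    cell = (own, opp) if player == 0 else (opp, own)
    return payoffs[cell][player]

def dominant(player):
    """Return the player's dominant strategy, or None if there is none."""
    own_strats = A_strats if player == 0 else B_strats
    opp_strats = B_strats if player == 0 else A_strats
    for s in own_strats:
        if all(pay(s, o, player) >= pay(t, o, player)
               for o in opp_strats for t in own_strats):
            return s
    return None

print(dominant(0), dominant(1))  # both firms: Advertise
```

Running the check confirms the text's conclusion: advertising is dominant for both firms.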
Unfortunately, not every game has a dominant strategy for each player. To see this, let’s change our advertising example slightly. The payoff matrix in Table 13.2 is the same as in Table 13.1 except for the bottom right-hand corner: if neither firm advertises, Firm B will again earn a profit of 2, but Firm A will earn a profit of 20. (Perhaps Firm A’s ads are expensive and largely designed to refute Firm B’s claims, so by not advertising, Firm A can reduce its expenses considerably.)
Now Firm A has no dominant strategy. Its optimal decision depends on what Firm B does. If Firm B advertises, Firm A does best by advertising; but if Firm B does not advertise, Firm A also does best by not advertising. Now suppose both firms must make their decisions at the same time. What should Firm A do?
To answer this, Firm A must put itself in Firm B’s shoes. What decision is best from Firm B’s point of view, and what is Firm B likely to do? The answer is clear: Firm B has a dominant strategy—advertise, no matter what Firm A does. (If Firm A advertises, B earns 5 by advertising and 0 by not advertising; if A doesn’t advertise, B earns 8 if it advertises and 2 if it doesn’t.) Therefore, Firm A can conclude that Firm B will advertise. This means that Firm A should advertise (and thereby earn 10 instead of 6). The logical outcome of the game is that both firms will advertise because Firm A is doing the best it can, given Firm B’s decision; and Firm B is doing the best it can, given Firm A’s decision.
Nash equilibrium
Nash equilibrium is a set of strategies (or actions) such that each player is doing the best it can, given the actions of its opponents. Because each player has no incentive to deviate from its Nash strategy, the strategies are stable.
Dominant Strategies: I’m doing the best I can no matter what you do. You’re doing the best you can no matter what I do.
Nash Equilibrium: I’m doing the best I can, given what you are doing. You’re doing the best you can, given what I am doing.
Note that a dominant-strategy equilibrium is a special case of a Nash equilibrium.
In the advertising game of Table 13.2, there is a single Nash equilibrium—both firms advertise. In general, a game need not have a single Nash equilibrium. Sometimes there is no Nash equilibrium, and sometimes there are several (i.e., several sets of strategies are stable and selfenforcing). A few more examples will help to clarify this.
The Product choice Problem
Consider the following “product choice” problem. Two breakfast cereal companies face a market in which two new variations of cereal can be successfully introduced—provided that each variation is introduced by only one firm. There is a market for a new “crispy” cereal and a market for a new “sweet” cereal, but each firm has the resources to introduce only one new product. The payoff matrix for the two firms might look like the one in Table 13.3.
In this game, each firm is indifferent about which product it produces, so long as it does not introduce the same product as its competitor. If coordination were possible, the firms would probably agree to divide the market. But what if the firms must behave noncooperatively? Suppose that somehow, perhaps through a news release, Firm 1 indicates that it is about to introduce the sweet cereal, and that Firm 2 (after hearing this) announces its plan to introduce the crispy one. Given the action that it believes its opponent to be taking, neither firm has an incentive to deviate from its proposed action. If it takes the proposed action, its payoff is 10, but if it deviates while its opponent’s action remains unchanged, its payoff will be −5. Therefore, the strategy set given by the bottom left-hand corner of the payoff matrix is stable and constitutes a Nash equilibrium: given the strategy of its opponent, each firm is doing the best it can and has no incentive to deviate.
Note that the upper right-hand corner of the payoff matrix is also a Nash equilibrium, which might occur if Firm 1 indicated that it was about to produce the crispy cereal. Each Nash equilibrium is stable because once the strategies are chosen, no player will unilaterally deviate from them. However, without more information, we have no way of knowing which equilibrium (crispy/sweet vs. sweet/crispy) is likely to result, or if either will result. Of course, both firms have a strong incentive to reach one of the two Nash equilibria: if they both introduce the same type of cereal, they will both lose money. The fact that the two firms are not allowed to collude does not mean that they will not reach a Nash equilibrium. As an industry develops, understandings often evolve as firms “signal” each other about the paths the industry is to take.
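The two equilibria of the product-choice game can be found by brute force. A minimal sketch, with payoff numbers following the description in the text (10 to each firm when the cereals differ, −5 each when they coincide):

```python
# Brute-force search for pure-strategy Nash equilibria in the
# product-choice game. Cells are (Firm 1's payoff, Firm 2's payoff).
strategies = ["Crispy", "Sweet"]
payoffs = {
    ("Crispy", "Crispy"): (-5, -5),
    ("Crispy", "Sweet"):  (10, 10),
    ("Sweet", "Crispy"):  (10, 10),
    ("Sweet", "Sweet"):   (-5, -5),
}

def is_nash(s1, s2):
    # Neither firm can gain by deviating unilaterally.
    no_dev_1 = all(payoffs[(s1, s2)][0] >= payoffs[(d, s2)][0] for d in strategies)
    no_dev_2 = all(payoffs[(s1, s2)][1] >= payoffs[(s1, d)][1] for d in strategies)
    return no_dev_1 and no_dev_2

equilibria = [(s1, s2) for s1 in strategies for s2 in strategies
              if is_nash(s1, s2)]
print(equilibria)  # [('Crispy', 'Sweet'), ('Sweet', 'Crispy')]
```

The search returns exactly the two stable outcomes identified in the text, with no way to say which of them will arise.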
Maximin Strategies
The concept of a Nash equilibrium relies heavily on individual rationality. Each player’s choice of strategy depends not only on its own rationality, but also on the rationality of its opponent. This can be a limitation, as the example in Table 13.4 shows.
In this game, two firms compete in selling fileencryption software. Because both firms use the same encryption standard, files encrypted by one firm’s software can be read by the other’s—an advantage for consumers. Nonetheless, Firm 1 has a much larger market share. (It entered the market earlier and its software has a better user interface.) Both firms are now considering an investment in a new encryption standard.
Note that investing is a dominant strategy for Firm 2 because by doing so it will do better regardless of what Firm 1 does. Thus Firm 1 should expect Firm 2 to invest. In this case, Firm 1 would also do better by investing (and earning $20 million) than by not investing (and losing $10 million). Clearly the outcome (invest, invest) is a Nash equilibrium for this game, and you can verify that it is the only Nash equilibrium. But note that Firm 1’s managers had better be sure that Firm 2’s managers understand the game and are rational. If Firm 2 should happen to make a mistake and fail to invest, it would be extremely costly to Firm 1. (Consumer confusion over incompatible standards would arise, and Firm 1, with its dominant market share, would lose $100 million.)
If you were Firm 1, what would you do? If you tend to be cautious, and if you are concerned that the managers of Firm 2 might not be fully informed or rational, you might choose to play “don’t invest.” In that case, the worst that can happen is that you will lose $10 million; you no longer have a chance of losing $100 million. This strategy is called a maximin strategy because it maximizes the minimum gain that can be earned. If both firms used maximin strategies, the outcome would be that Firm 1 does not invest and Firm 2 does. A maximin strategy is conservative, but it is not profit-maximizing. (Firm 1, for example, loses $10 million rather than earning $20 million.) Note that if Firm 1 knew for certain that Firm 2 was using a maximin strategy, it would prefer to invest (and earn $20 million) instead of following its own maximin strategy of not investing.
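The maximin logic can be sketched as follows. Firm 1's payoffs are those given in the text; Firm 2's exact figures are illustrative assumptions consistent with investing being its dominant strategy:

```python
# Payoffs (Firm 1, Firm 2) in $ millions for the encryption-standard
# game: investing is dominant for Firm 2, but Firm 1 risks -100 if
# Firm 2 fails to invest. Firm 2's numbers are assumed for illustration.
f1_strats = f2_strats = ["Don't invest", "Invest"]
payoffs = {
    ("Don't invest", "Don't invest"): (0, 0),
    ("Don't invest", "Invest"):       (-10, 10),
    ("Invest", "Don't invest"):       (-100, 0),
    ("Invest", "Invest"):             (20, 10),
}

def maximin_firm1():
    # Choose the strategy whose worst-case payoff is largest.
    return max(f1_strats,
               key=lambda s: min(payoffs[(s, t)][0] for t in f2_strats))

def maximin_firm2():
    return max(f2_strats,
               key=lambda t: min(payoffs[(s, t)][1] for s in f1_strats))

print(maximin_firm1(), maximin_firm2())  # Don't invest / Invest
```

As in the text, maximin play has Firm 1 not investing (worst case −10 instead of −100) while Firm 2 invests.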
MAXIMIZING THE EXPECTED PAYOFF
If Firm 1 is unsure about what Firm 2 will do but can assign probabilities to each feasible action for Firm 2, it could instead use a strategy that maximizes its expected payoff. Suppose, for example, that Firm 1 thinks that there is only a 10-percent chance that Firm 2 will not invest. In that case, Firm 1’s expected payoff from investing is (.1)(−100) + (.9)(20) = $8 million. Its expected payoff if it doesn’t invest is (.1)(0) + (.9)(−10) = −$9 million. In this case, Firm 1 should invest.
On the other hand, suppose Firm 1 thinks that the probability that Firm 2 will not invest is 30 percent. Then Firm 1’s expected payoff from investing is (.3)(−100) + (.7)(20) = −$16 million, while its expected payoff from not investing is (.3)(0) + (.7)(−10) = −$7 million. Thus Firm 1 will choose not to invest.
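The calculation generalizes to any assessed probability. A minimal sketch using the payoffs quoted in the text:

```python
# Firm 1's expected payoff under each action, given its assessed
# probability p that Firm 2 will NOT invest. Payoffs in $ millions:
# invest -> -100 or 20; don't invest -> 0 or -10.
def expected_payoffs(p_firm2_not_invest):
    p = p_firm2_not_invest
    invest = p * (-100) + (1 - p) * 20
    dont_invest = p * 0 + (1 - p) * (-10)
    return invest, dont_invest

for p in (0.1, 0.3):
    inv, dont = expected_payoffs(p)
    best = "invest" if inv > dont else "don't invest"
    print(p, round(inv, 2), round(dont, 2), best)
```

With p = 0.1 the expected payoffs are roughly 8 versus −9 (invest); with p = 0.3 they are roughly −16 versus −7 (don't invest), matching the two cases worked in the text.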
You can see that Firm 1’s strategy depends critically on its assessment of the probabilities of different actions by Firm 2. Determining these probabilities may seem like a tall order. However, firms often face uncertainty (over market conditions, future costs, and the behaviour of competitors), and must make the best decisions they can based on probability assessments and expected values.
Input-Output Analysis
Introduction
In its “static” version, Professor Leontief’s input-output analysis deals with this particular question: “What level of output should each of the n industries in an economy produce, in order that it will just be sufficient to satisfy the total demand for that product?” The rationale for the term input-output analysis is quite plain to see. The output of any industry (say, the steel industry) is needed as an input in many other industries, or even for that industry itself; therefore the “correct” (i.e., shortage-free as well as surplus-free) level of steel output will depend on the input requirements of all the n industries. In turn, the output of many other industries will enter into the steel industry as inputs, and consequently the “correct” levels of the other products will in turn depend partly on the input requirements of the steel industry. In view of this interindustry dependence, any set of “correct” output levels for the n industries must be one that is consistent with all the input requirements in the economy. So, input-output analysis should be of great use in production planning, such as planning for the economic development of a country or for a program of national defense. In essence, the problem posed in input-output analysis boils down to one of solving a system of simultaneous equations, and matrix algebra can again be of service.
Assumptions
To simplify the problem, the following assumptions are as a rule adopted:
(1) Each industry produces only one homogeneous commodity (broadly interpreted, this does permit the case of two or more jointly produced commodities, provided they are produced in a fixed proportion to one another).
(2) Each industry uses a fixed input ratio (or factor combination) for the production of its output.
(3) Production in every industry is subject to constant returns to scale, so that a k-fold change in every input will result in an exactly k-fold change in the output.
These assumptions are, of course, unrealistic. From these assumptions we see that, in order to produce each unit of the j^{th} commodity, the input of the i^{th} commodity must be a fixed amount, which we shall denote by a_{ij}. Specifically, the production of each unit of the j^{th} commodity will require a_{1j} (amount) of the first commodity, a_{2j} of the second commodity, …, and a_{nj} of the n^{th} commodity. (The order of the subscripts in a_{ij} is easy to remember: the first subscript refers to the input, and the second to the output, so that a_{ij} indicates how much of the i^{th} commodity is used for the production of each unit of the j^{th} commodity.)
For example, we may assume prices to be given and, thus, adopt “a dollar’s worth” of each commodity as its unit. Then the statement a_{32} = 0.35 will mean that 35 cents’ worth of the third commodity is required as an input for producing a dollar’s worth of the second commodity. The a_{ij} symbol will be referred to as an input coefficient.
For an n-industry economy, the input coefficients can be arranged into a matrix A = [a_{ij}], as in the table below, in which each column specifies the input requirements for the production of one unit of the output of a particular industry. The second column, for example, states that to produce a unit (a dollar’s worth) of commodity II, the inputs needed are: a_{12} units of commodity I, a_{22} units of commodity II, etc. If no industry uses its own product as an input, then the elements in the principal diagonal of matrix A will all be zero.
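Although the full solution method is developed later, the logic already implicit here can be sketched numerically: with an illustrative 3-industry coefficient matrix A and final-demand vector d (all numbers assumed for illustration), the shortage-free, surplus-free output levels x solve x = Ax + d.

```python
import numpy as np

# Sketch of the static Leontief system. a_ij (row i, column j) is the
# amount of commodity i needed to produce one unit of commodity j; the
# matrix and the final-demand vector d are illustrative assumptions.
A = np.array([
    [0.2, 0.3, 0.2],
    [0.4, 0.1, 0.2],
    [0.1, 0.3, 0.2],
])
d = np.array([10.0, 5.0, 6.0])

# The "correct" outputs x satisfy x = A x + d, i.e. (I - A) x = d.
x = np.linalg.solve(np.eye(3) - A, d)
print(np.round(x, 2))

# Consistency check: interindustry use plus final demand equals output.
print(np.allclose(A @ x + d, x))  # True
```

Solving the simultaneous-equation system is exactly the matrix-algebra task the introduction alludes to.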
To be Continued….
The two-part tariff
The two-part tariff is related to price discrimination and provides another means of extracting consumer surplus. It requires consumers to pay a fee up front for the right to buy a product. Consumers then pay an additional fee for each unit of the product they wish to consume. The classic example of this strategy is an amusement park. You pay an admission fee to enter, and you also pay a certain amount for each ride. The owner of the park must decide whether to charge a high entrance fee and a low price for the rides or, alternatively, to admit people for free but charge high prices for the rides.
Example: The two-part tariff has been applied in many settings: tennis and golf clubs (you pay an annual membership fee plus a fee for each use of a court or round of golf); the rental of large mainframe computers (a flat monthly fee plus a fee for each unit of processing time consumed); telephone service (a monthly hookup fee plus a fee for minutes of usage). The strategy also applies to the sale of products like safety razors (you pay for the razor, which lets you consume the blades that fit that brand of razor).
The problem for the firm is how to set the entry fee (which we denote by T) versus the usage fee (which we denote by P). Assuming that the firm has some market power, should it set a high entry fee and low usage fee, or vice versa? To solve this problem, we need to understand the basic principles involved.
SINGLE CONSUMER CASE
Let’s begin with the artificial but simple case illustrated in Figure below. Suppose there is only one consumer in the market (or many consumers with identical demand curves). Suppose also that the firm knows this consumer’s demand curve. Now, remember that the firm wants to capture as much consumer surplus as possible. In this case, the solution is straightforward: Set the usage fee P equal to marginal cost and the entry fee T equal to the total consumer surplus for each consumer. Thus, the consumer pays T* (or a bit less) to use the product, and P* = MC per unit consumed. With the fees set in this way, the firm captures all the consumer surplus as its profit.
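This rule can be illustrated with a hypothetical linear demand curve. In the sketch below the intercept A, slope B, and marginal cost MC are invented for illustration; the point is only that with P* = MC, the entry fee T* (the surplus triangle) captures the entire profit.

```python
# Single-consumer two-part tariff, assuming a linear inverse
# demand P = A - B*Q and constant marginal cost MC (made-up values).
A, B, MC = 20.0, 2.0, 4.0

# Set the usage fee equal to marginal cost ...
P_star = MC
Q_star = (A - P_star) / B          # quantity demanded at P* = MC

# ... and the entry fee equal to the whole consumer surplus:
# the triangle under the demand curve and above P*.
T_star = 0.5 * (A - P_star) * Q_star

# Sales at P* = MC earn nothing per unit; all profit comes from T*.
profit = T_star + (P_star - MC) * Q_star
print(Q_star, T_star, profit)  # 8.0 64.0 64.0
```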
TWO CONSUMERS CASE
Now suppose that there are two different consumers (or two groups of identical consumers). The firm, however, can set only one entry fee and one usage fee. It would thus no longer want to set the usage fee equal to marginal cost. If it did, it could make the entry fee no larger than the consumer surplus of the consumer with the smaller demand (or else it would lose that consumer), and this would not yield a maximum profit. Instead, the firm should set the usage fee above marginal cost and then set the entry fee equal to the remaining consumer surplus of the consumer with the smaller demand. Figure below illustrates this. With the optimal usage fee at P* greater than MC, the firm’s profit is 2T* + (P*−MC)(Q_{1}+Q_{2}). (There are two consumers, and each pays T*.) You can verify that this profit is more than twice the area of triangle ABC, the consumer surplus of the consumer with the smaller demand when P=MC. To determine the exact values of P* and T*, the firm would need to know (in addition to its marginal cost) the demand curves D_{1} and D_{2}. It would then write down its profit as a function of P and T and choose the two prices that maximize this function.
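The last step can be sketched numerically. Assuming two hypothetical linear demand curves with slope −1 (all numbers invented), a simple grid search over the usage fee P, with T set at each P to the smaller consumer's remaining surplus, finds the profit-maximizing pair:

```python
MC = 2.0

def q1(p):  # smaller demand (slope -1, intercept 10)
    return max(0.0, 10.0 - p)

def q2(p):  # larger demand (slope -1, intercept 16)
    return max(0.0, 16.0 - p)

def surplus1(p):
    # smaller consumer's surplus: a triangle with base and height
    # both equal to q1(p), since the demand slope is -1
    return 0.5 * q1(p) ** 2

best_profit, best_p, best_t = 0.0, None, None
for i in range(0, 801):
    p = i / 100                 # candidate usage fee
    t = surplus1(p)             # entry fee = smaller consumer's surplus
    profit = 2 * t + (p - MC) * (q1(p) + q2(p))
    if profit > best_profit:
        best_profit, best_p, best_t = profit, p, t

print(best_p, best_t, best_profit)  # 5.0 12.5 73.0
```

Consistent with the argument above, the optimal usage fee (5.0) is well above marginal cost (2.0), and the resulting profit of 73 exceeds twice the smaller consumer's surplus at P = MC (2 × 32 = 64).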
MANY CONSUMERS CASE
Most firms, however, face a variety of consumers with different demands. Unfortunately, there is no simple formula to calculate the optimal two-part tariff in this case, and some trial-and-error experiments might be required. But there is always a trade-off: A lower entry fee means more entrants and thus more profit from sales of the item. On the other hand, as the entry fee becomes smaller and the number of entrants larger, the profit derived from the entry fee will fall. The problem, then, is to pick an entry fee that results in the optimum number of entrants, that is, the fee that allows for maximum profit. In principle, we can do this by starting with a price for sales of the item P, finding the optimum entry fee T, and then estimating the resulting profit. The price P is then changed, and the corresponding entry fee calculated, along with the new profit level. By iterating this way, we can approach the optimal two-part tariff.
Figure above illustrates this principle. The firm’s profit π is divided into two components, each of which is plotted as a function of the entry fee T, assuming a fixed sales price P. The first component, π_{a}, is the profit from the entry fee and is equal to the revenue n(T)T, where n(T) is the number of entrants. (Note that a high T implies a small n.) Initially, as T is increased from zero, revenue n(T)T rises. Eventually, however, further increases in T will make n so small that n(T)T falls. The second component, π_{s}, is the profit from sales of the item itself at price P and is equal to (P − MC)Q, where Q is the rate at which entrants purchase the item. The larger the number of entrants n, the larger Q will be. Thus π_{s} falls when T is increased because a higher T reduces n.
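A minimal numerical sketch of this decomposition follows; the entrant function n(T), the fixed price P, and the per-entrant purchase rate are all invented for illustration:

```python
P, MC = 6.0, 4.0

def n(T):
    # number of entrants falls linearly as the entry fee T rises
    return max(0.0, 100.0 - 2.0 * T)

def pi_a(T):
    # profit from entry fees: n(T) * T (rises, then falls in T)
    return n(T) * T

def pi_s(T):
    # profit from sales: (P - MC) * Q, with Q = 5 units per entrant,
    # so it falls as T reduces the number of entrants
    return (P - MC) * 5.0 * n(T)

# Scan candidate fees to find the T maximizing total profit at this P.
best_T = max((i / 10 for i in range(0, 501)),
             key=lambda T: pi_a(T) + pi_s(T))
print(best_T, pi_a(best_T) + pi_s(best_T))  # 20.0 1800.0
```

Note that the profit-maximizing fee here is well below the fee that would maximize π_{a} alone, precisely because raising T also erodes π_{s}.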
Starting with a number for P, we determine the optimal (profit-maximizing) T*. We then change P, find a new T*, and determine whether profit is now higher or lower. This procedure is repeated until profit has been maximized.
Obviously, more data are needed to design an optimal two-part tariff than to choose a single price. Knowing marginal cost and the aggregate demand curve is not enough. It is impossible (in most cases) to determine the demand curve of every consumer, but one would at least like to know by how much individual demands differ from one another. If consumers’ demands for your product are fairly similar, you would want to charge a price P that is close to marginal cost and make the entry fee T large. This is the ideal situation from the firm’s point of view because most of the consumer surplus could then be captured. On the other hand, if consumers have different demands for your product, you would probably want to set P well above marginal cost and charge a lower entry fee T. In that case, however, the two-part tariff is a less effective means of capturing consumer surplus; setting a single price may do almost as well.
At Disneyland in California and Walt Disney World in Florida, the strategy is to charge a high entry fee and charge nothing for the rides. This policy makes sense because consumers have reasonably similar demands for Disney vacations. Most people visiting the parks plan daily budgets (including expenditures for food and beverages) that, for most consumers, do not differ very much.
Firms are perpetually searching for innovative pricing strategies, and a few have devised and introduced a two-part tariff with a “twist”: the entry fee T entitles the customer to a certain number of free units. For example, if you buy a Gillette razor, several blades are usually included in the package. The monthly lease fee for a mainframe computer usually includes some free usage before usage is charged. This twist lets the firm set a higher entry fee T without losing as many small customers. Because these small customers might pay little or nothing for usage under this scheme, the higher entry fee will capture their surplus without driving them out of the market, while also capturing more of the surplus of the large customers.
Hypothesis Testing [Procedure]
PROCEDURE FOR HYPOTHESIS TESTING
To test a hypothesis means to tell (on the basis of the data the researcher has collected) whether or not the hypothesis seems to be valid. In hypothesis testing the main question is: should we accept the null hypothesis or reject it? The procedure for hypothesis testing refers to all those steps that we undertake for making a choice between the two actions, i.e., rejection and acceptance of a null hypothesis. The various steps involved in hypothesis testing are stated below:
(i) Making a formal statement: This step consists in making a formal statement of the null hypothesis (H0) and also of the alternative hypothesis (Ha). This means that hypotheses should be clearly stated, considering the nature of the research problem. For instance, Mr. Mohan of the Civil Engineering Department wants to test whether the load-bearing capacity of an old bridge is more than 10 tons; in that case he can state his hypotheses as under:
Null hypothesis H0 : μ = 10 tons
Alternative Hypothesis Ha : μ > 10 tons
Take another example. The average score in an aptitude test administered at the national level is 80. To evaluate a state’s education system, the average score of 100 of the state’s students selected on a random basis was 75. The state wants to know if there is a significant difference between the local scores and the national scores. In such a situation the hypotheses may be stated as under:
Null hypothesis H0 : μ = 80
Alternative Hypothesis Ha : μ ≠ 80
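The aptitude-test example above can be carried through as a two-tailed test. The sketch below assumes the population standard deviation is known; the value σ = 25 is invented for illustration, since the passage does not give one.

```python
import math

# Two-tailed z-test for H0: mu = 80 against Ha: mu != 80,
# assuming a known population standard deviation (sigma = 25
# is a made-up value for this sketch).
mu0, xbar, sigma, n = 80.0, 75.0, 25.0, 100
alpha = 0.05

# test statistic: how many standard errors the sample mean is from mu0
z = (xbar - mu0) / (sigma / math.sqrt(n))

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# two-tailed p-value: probability of a result at least this extreme
p_value = 2.0 * phi(-abs(z))
print(z, round(p_value, 4))  # -2.0 0.0455

reject = p_value <= alpha
print(reject)  # True: the state mean differs significantly from 80
```

At the 5% level the null hypothesis is rejected; at the 1% level (α = 0.01) it would be accepted, illustrating why the significance level must be fixed in advance.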
The formulation of hypotheses is an important step which must be accomplished with due care in accordance with the object and nature of the problem under consideration. It also indicates whether we should use a one-tailed test or a two-tailed test. If H_{a} is of the type “greater than” (or of the type “lesser than”), we use a one-tailed test, but when H_{a} is of the type “whether greater or smaller,” then we use a two-tailed test.
(ii) Selecting a significance level: The hypotheses are tested on a predetermined level of significance and as such the same should be specified. Generally, in practice, either 5% level or 1% level is adopted for the purpose. The factors that affect the level of significance are: (a) the magnitude of the difference between sample means; (b) the size of the samples; (c) the variability of measurements within samples; and (d) whether the hypothesis is directional or nondirectional (A directional hypothesis is one which predicts the direction of the difference between, say, means). In brief, the level of significance must be adequate in the context of the purpose and nature of enquiry.
(iii) Deciding the distribution to use: After deciding the level of significance, the next step in hypothesis testing is to determine the appropriate sampling distribution. The choice generally remains between the normal distribution and the t-distribution. The rules for selecting the correct distribution are similar to those which we have stated earlier in the context of estimation.
(iv) Selecting a random sample and computing an appropriate value: Another step is to select a random sample(s) and compute an appropriate value from the sample data concerning the test statistic utilising the relevant distribution. In other words, draw a sample to furnish empirical data.
(v) Calculation of the probability: One has then to calculate the probability that the sample result would diverge as widely as it has from expectations, if the null hypothesis were in fact true.
(vi) Comparing the probability: Yet another step consists in comparing the probability thus calculated with the specified value for α, the significance level. If the calculated probability is equal to or smaller than the α value in case of a one-tailed test (and α/2 in case of a two-tailed test), then reject the null hypothesis (i.e., accept the alternative hypothesis), but if the calculated probability is greater, then accept the null hypothesis. In case we reject H_{0}, we run a risk (at most equal to the level of significance) of committing an error of Type I, but if we accept H_{0}, then we run some risk (the size of which cannot be specified as long as H_{0} happens to be vague rather than specific) of committing an error of Type II.