Hi, my name is Biju P R. I am a writer, teacher and academic blogger. Anything concerning society and technology interests me. My blog posts here show what I am doing. Please check them out.

Monday, August 15, 2011

Inductive reasoning

Induction, also known as inductive reasoning or inductive logic, is a type of reasoning that involves moving from a set of specific facts to a general conclusion. It can also be seen as a form of theory-building, in which specific facts are used to create a theory that explains relationships between the facts and allows prediction of future knowledge. The premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; i.e. they do not ensure its truth. Induction is used to ascribe properties or relations to types based on an observation instance (i.e., on a number of observations or experiences); or to formulate laws based on limited observations of recurring phenomenal patterns. Induction is employed, for example, in using specific propositions such as:

This ice is cold. (or: All ice I have ever touched was cold.)

This billiard ball moves when struck with a cue. (or: Of one hundred billiard balls struck with a cue, all of them moved.)

...to infer general propositions such as:

All ice is cold.

All billiard balls move when struck with a cue.

Another example would be:

3+5=8 and eight is an even number. Therefore, an odd number added to another odd number will result in an even number.
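The inductive move from instances to a general rule can be sketched in code. The example below is a hypothetical illustration, not part of the original text: it collects many observations of two odd numbers being added and checks that every observed sum is even. The general conclusion drawn from such checking is only as strong as the support the observed instances give it.

```python
import random

def observe_instances(n=1000):
    """Collect n observations of the form (odd a, odd b, a + b)."""
    observations = []
    for _ in range(n):
        a = random.randrange(1, 1000, 2)  # a random odd number
        b = random.randrange(1, 1000, 2)  # another random odd number
        observations.append((a, b, a + b))
    return observations

# The inductive step: every observed sum is even, so we tentatively
# conclude the general rule "an odd number plus an odd number is even".
observations = observe_instances()
all_sums_even = all(s % 2 == 0 for (_, _, s) in observations)
print(all_sums_even)
```

In this particular case the rule also happens to be provable deductively, which is why the surrounding text is careful to distinguish inductive support from entailment.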

Note that mathematical induction is not a form of inductive reasoning. While mathematical induction may be inspired by the non-base cases, the formulation of a base case firmly establishes it as a form of deductive reasoning.

Strong and Weak Induction

Strong induction

All observed crows are black.


All crows are black.

This exemplifies the nature of induction: inducing the universal from the particular. However, the conclusion is not certain. Unless we can systematically falsify the possibility of crows of another colour, the statement (conclusion) may actually be false.

For example, one could examine the bird's genome and learn whether it is capable of producing a differently coloured bird. In doing so, we could discover that albinism is possible, resulting in light-coloured crows. Even if you change the definition of "crow" to require blackness, the original question of the colour possibilities for a bird of that species would stand, only semantically hidden.

A strong induction is thus an argument in which the truth of the premises would make the conclusion probable, but not necessarily guarantee it as being factual.

Weak induction

I always hang pictures on nails.


All pictures hang from nails.

Assuming the first statement to be true, this example is built on the certainty that "I always hang pictures on nails" leading to the generalisation that "All pictures hang from nails". However, the link between the premise and the inductive conclusion is weak. No reason exists to believe that, just because one person hangs pictures on nails, there are no other ways for pictures to be hung, or that other people cannot do other things with pictures. Indeed, not all pictures are hung from nails; moreover, not all pictures are hung. The conclusion cannot be strongly inductively made from the premise. Using other knowledge we can easily see that this example of induction would lead us to a clearly false conclusion. Conclusions drawn in this manner are usually overgeneralisations.

Many speeding tickets are given to teenagers.


All teenagers drive fast.

In this example, the premise is built upon a certainty; however, it is not one that leads to the conclusion. Not every teenager observed has been given a speeding ticket. In other words, unlike "The sun rises every morning", there are already plenty of examples of teenagers not being given speeding tickets. Therefore the conclusion drawn is false. Moreover, when the link is weak, the inductive logic does not give us a strong conclusion. In both of these examples of weak induction, the logical means of connecting the premise and conclusion (with the word "therefore") are faulty, and do not give us a strong inductively reasoned statement.



Positivism

-a theory that theology and metaphysics are earlier, imperfect modes of knowledge and that positive knowledge is based on natural phenomena and their properties and relations as verified by the empirical sciences

Positivism is a theory of knowledge according to which the only kind of sound knowledge available to humankind is that of science grounded in observation. Positivism also advances a unity-of-science thesis, according to which all sciences can be integrated into a single natural system.

In a positivist view of the world, science was seen as the way to get at truth, to understand the world well enough so that we might predict and control it. The world and the universe were deterministic: they operated by laws of cause and effect that we could discern if we applied the unique approach of the scientific method. Science was largely a mechanistic or mechanical affair. We use deductive reasoning to postulate theories that we can test. Based on the results of our studies, we may learn that our theory doesn't fit the facts well and so we need to revise our theory to better predict reality. The positivist believed in empiricism: the idea that observation and measurement were the core of the scientific endeavour. The key approach of the scientific method is the experiment, the attempt to discern natural laws through direct manipulation and observation.

-a trend in bourgeois philosophy which declares natural (empirical) sciences to be the sole source of true knowledge and rejects the cognitive value of philosophical study.

Positivism emerged in response to the inability of speculative philosophy (e.g. Classical German Idealism) to solve philosophical problems which had arisen as a result of scientific development. Positivists went to an opposite extreme and rejected theoretical speculation as a means of obtaining knowledge.

Positivism declared false and senseless all problems, concepts and propositions of traditional philosophy on being, substance, causes, etc. that could not be solved or verified by experience, owing to their highly abstract nature.

Positivism claims to be a fundamentally new, non-metaphysical ("positive") philosophy, modelled on empirical sciences and providing them with a methodology. Positivism is essentially empiricism brought to extreme logical consequences in certain respects: inasmuch as any knowledge is empirical knowledge in one form or another, no speculation can be knowledge.

Positivism was founded by Auguste Comte, who introduced the term "positivism". Historically, there are three stages in the development of positivism.

The exponents of the first were Comte, E. Littré and P. Laffitte in France, and J. S. Mill and Herbert Spencer in England. Alongside the problems of the theory of knowledge (Comte) and logic (Mill), the main place in the first positivism was assigned to sociology (Comte's idea of transforming society on the basis of science, Spencer's organic theory of society).

The rise of the second stage in positivism - empirio-criticism - dates back to the 1870s-1890s and is associated with Ernst Mach and Avenarius, who renounced even the formal recognition of objectively real objects, which had been a feature of early positivism. In Machism, the problems of cognition were interpreted from the viewpoint of extreme psychologism, which merged with subjectivism.

The rise and formation of the latest Positivism, or neo-positivism, is linked up with the activity of the Vienna Circle (O. Neurath, Carnap, Schlick, Frank and others) and of the Berlin Society for Scientific Philosophy (Reichenbach and others), which combined a number of trends: logical atomism, logical positivism, semantics (close to these trends are Percy Bridgman's operationalism and the pragmatism of William James et al). The main place in the third positivism is taken by the philosophical problems of language, symbolic logic, the structure of scientific investigations, and others. Having renounced psychologism, the exponents of the third positivism took the course of reconciling the logic of science with mathematics, the course of formalisation of epistemological problems.

It is very difficult to gain a clear understanding of positivism because of the number of ways in which the term has been defined and interpreted by many of its supporters and critics. It is, however, safe to say that an important goal of positivism was objectivity.

The law of three stages of Comte suggests that he used the term ‘positive’ to mean ‘scientific’. His assertion was that scientific inquiry must be empirical; it should be based on the observation of facts and not on religion which created mystery about the world, or metaphysics which was of no practical value.

The methods and laws applied to the natural sciences can be equally applied to the social sciences. The basis of such logic can be found in the Enlightenment era: God has been replaced by reason. Comte, in Positive Philosophy, elaborated this idea in the study of the social world.

Just as the law of gravitation explains why an apple falls, the social world too can be understood through cause-and-effect analysis: why, for example, some workers perform better in their jobs than others.

In its broadest sense, positivism is a rejection of metaphysics. It is a position that holds that the goal of knowledge is simply to describe the phenomena that we experience. The purpose of science is simply to stick to what we can observe and measure. Knowledge of anything beyond that, a positivist would hold, is impossible.

But Karl Popper denied this approach. To understand workers' high performance as a result of job satisfaction, we may observe that job satisfaction is linked to work performance. Yet repeated observation of ten, fifteen or twenty-five workers would result in finding that there is at least one worker who is dissatisfied with the job and still outperforms those who are satisfied. What we need to do, instead of verifying what we already know, is to try to falsify it. Popper thus shifts to the hypothetico-deductive method. So in studying job satisfaction, we need to look at pay, skill, training, democracy in workplaces, and so on.
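Popper's point can be sketched with a toy example. The worker records below are invented purely for illustration; the universal claim "the top performer is always a satisfied worker" survives any number of confirming observations but is overturned by a single counterexample.

```python
# Hypothetical observations of satisfaction and performance (invented data).
workers = [
    {"satisfied": True,  "performance": 82},
    {"satisfied": True,  "performance": 75},
    {"satisfied": False, "performance": 61},
    {"satisfied": False, "performance": 91},  # a dissatisfied high performer
]

# Verification looks for confirming instances; falsification looks for
# the one counterexample that refutes the universal claim.
best = max(workers, key=lambda w: w["performance"])
hypothesis_survives = best["satisfied"]  # "the top performer is always satisfied"
print(hypothesis_survives)  # False: one observation is enough to falsify
```

The first three records all confirm the hypothesis; only checking the fourth, the potential falsifier, tells us anything new.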

To Comte,

Positivism was ‘scientific’ because knowledge had practical value and the growth of science was for the benefit of humankind.

To him, it was 'empiricist' because knowledge could only be derived from human experience.

It was ‘encyclopaedic’ because all the sciences came under a single system of natural laws.

And it was ‘progressivist’ because social stability could be restored by re-establishing a moral order, based on scientific knowledge, not on religion which made the world mysterious and prevented empirical inquiry, or metaphysical speculations which had no practical value.

In Comte’s view, there were four enemies of the positive philosophy: religion (as a dogma not as a moral force), metaphysics (in which he included psychology), individualism (which to him was the cause of social disorder) and revolutionary utopianism.

The core assumptions of positivism include these:

-that social science is identical in its logic to natural science;

-that science involves the search for general laws about empirical phenomena;

-and that discovery and explanation depend upon a rigorous empirical scrutiny of the phenomena under question.

Positivism is doubtful about the role of theory, preferring instead to make do with empirical observations, classes of empirical phenomena, and generalizations across classes of phenomena.

Finally, positivism is dubious about the reality of causal connections between empirical phenomena.

So the basic premises of positivism are:

1. We seek to identify processes of cause and effect to explain phenomena;

2. Knowledge should be based on what can be tested by observation of tangible evidence;

3. Researchers should use the scientific method, which emphasises control, standardisation and objectivity.

Positivism has not escaped the lot of traditional philosophy, since its own propositions (rejection of speculation, phenomenalism, etc.) turned out to be unverifiable by experience and, consequently, metaphysical.

Lecture notes prepared by Biju P R,Assistant Professor in Political Science,Govt Brennen College Thalassery

Research designs

Research designs are concerned with turning the research question into a testing project. The best design depends on your research questions. Every design has its positive and negative sides. The research design has been considered as a "blueprint" for research, dealing with at least four problems: what questions to study, what data are relevant, what data to collect, and how to analyze the results.

Research design can be divided into fixed and flexible research designs (Robson, 1993). Others have referred to this distinction as 'quantitative research designs' versus 'qualitative research designs'. However, fixed designs need not be quantitative, and flexible designs need not be qualitative. In fixed designs the design of the study is fixed before the main stage of data collection takes place. Fixed designs are normally theory-driven; otherwise it is impossible to know in advance which variables need to be controlled and measured. Often these variables are quantitative. Flexible designs allow for more freedom during the data collection. One reason for using a flexible research design can be that the variable of interest is not quantitatively measurable, such as culture. In other cases, theory might not be available before one starts the research.

Descriptive research

Although some people dismiss descriptive research as 'mere description', good description is fundamental to the research enterprise and it has added immeasurably to our knowledge of the shape and nature of our society. Descriptive research encompasses much government-sponsored research, including the population census, the collection of a wide range of social indicators and economic information such as household expenditure patterns, time use studies, employment and crime statistics and the like. Descriptions can be concrete or abstract. A relatively concrete description might describe the ethnic mix of a community, the changing age profile of a population or the gender mix of a workplace. Alternatively the description might ask more abstract questions such as 'Is the level of social inequality increasing or declining?', 'How secular is society?' or 'How much poverty is there in this community?' Accurate descriptions of the level of unemployment or poverty have historically played a key role in social policy reforms (Marsh, 1982). By demonstrating the existence of social problems, competent description can challenge accepted assumptions about the way things are and can provoke action.

Good description provokes the 'why' questions of explanatory research. If we detect greater social polarization over the last 20 years (i.e. the rich are getting richer and the poor are getting poorer) we are forced to ask 'Why is this happening?' But before asking 'why?' we must be sure about the fact and dimensions of the phenomenon of increasing polarization. It is all very well to develop elaborate theories as to why society might be more polarized now than in the recent past, but if the basic premise is wrong (i.e. society is not becoming more polarized) then attempts to explain a non-existent phenomenon are silly. Of course, description can degenerate into mindless fact gathering or what C. W. Mills (1959) called 'abstracted empiricism'. There are plenty of examples of unfocused surveys and case studies that report trivial information and fail to provoke any 'why' questions or provide any basis for generalization. However, this is a function of inconsequential descriptions rather than an indictment of descriptive research itself.

Explanatory research

Explanatory research focuses on why questions. For example, it is one thing to describe the crime rate in a country, to examine trends over time or to compare the rates in different countries. It is quite a different thing to develop explanations about why the crime rate is as high as it is, why some types of crime are increasing or why the rate is higher in some countries than in others. The way in which researchers develop research designs is fundamentally affected by whether the research question is descriptive or explanatory. It affects what information is collected. For example, if we want to explain why some people are more likely to be apprehended and convicted of crimes, we need to have hunches about why this is so. We may have many possibly incompatible hunches and will need to collect information that enables us to see which hunches work best empirically. Answering the 'why' questions involves developing causal explanations. Causal explanations argue that phenomenon Y (e.g. income level) is affected by factor X (e.g. gender). Some causal explanations will be simple while others will be more complex. For example, we might argue that there is a direct effect of gender on income (i.e. simple gender discrimination) (Figure 1.1a). We might argue for a causal chain, such as that gender affects choice of field of training, which in turn affects income.

Experimental Design

Experimental designs are often touted as the most "rigorous" of all research designs or, as the "gold standard" against which all other designs are judged. In one sense, they probably are. If you can implement an experimental design well (and that is a big "if" indeed), then the experiment is probably the strongest design with respect to internal validity. Why? Recall that internal validity is at the center of all causal or cause-effect inferences. When you want to determine whether some program or treatment causes some outcome or outcomes to occur, then you are interested in having strong internal validity. Essentially, you want to assess the proposition:

If X, then Y

or, in more colloquial terms:

If the program is given, then the outcome occurs

Unfortunately, it's not enough just to show that when the program or treatment occurs the expected outcome also happens. That's because there may be lots of reasons, other than the program, for why you observed the outcome. To really show that there is a causal relationship, you have to simultaneously address the two propositions:

If X, then Y


If not X, then not Y

Or, once again more colloquially:

If the program is given, then the outcome occurs


If the program is not given, then the outcome does not occur

If you are able to provide evidence for both of these propositions, then you've in effect isolated the program from all of the other potential causes of the outcome. You've shown that when the program is present the outcome occurs and when it's not present, the outcome doesn't occur. That points to the causal effectiveness of the program.
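The two propositions can be checked mechanically against a set of observations. The data below are invented for illustration; each pair records whether the program was given and whether the outcome occurred.

```python
# Hypothetical observations: (program_given, outcome_occurred).
observations = [
    (True, True), (True, True), (True, True),
    (False, False), (False, False), (False, False),
]

# "If X, then Y": whenever the program is given, the outcome occurs.
if_x_then_y = all(y for (x, y) in observations if x)

# "If not X, then not Y": whenever the program is withheld, the outcome is absent.
if_not_x_then_not_y = all(not y for (x, y) in observations if not x)

# Only both propositions together point to the program as the cause.
print(if_x_then_y and if_not_x_then_not_y)  # True for this data
```

With only the first three observations we could confirm "If X, then Y" but would have no evidence at all for "If not X, then not Y", which is exactly why a comparison condition is needed.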

Think of all this like a fork in the road. Down one path, you implement the program and observe the outcome. Down the other path, you don't implement the program and the outcome doesn't occur. But, how do we take both paths in the road in the same study? How can we be in two places at once? Ideally, what we want is to have the same conditions -- the same people, context, time, and so on -- and see whether when the program is given we get the outcome and when the program is not given we don't. Obviously, we can never achieve this hypothetical situation. If we give the program to a group of people, we can't simultaneously not give it! So, how do we get out of this apparent dilemma?

Perhaps we just need to think about the problem a little differently. What if we could create two groups or contexts that are as similar as we can possibly make them? If we could be confident that the two situations are comparable, then we could administer our program in one (and see if the outcome occurs) and not give the program in the other (and see if the outcome doesn't occur). And, if the two contexts are comparable, then this is like taking both forks in the road simultaneously! We can have our cake and eat it too, so to speak.

That's exactly what an experimental design tries to achieve. In the simplest type of experiment, we create two groups that are "equivalent" to each other. One group (the program or treatment group) gets the program and the other group (the comparison or control group) does not. In all other respects, the groups are treated the same. They have similar people, live in similar contexts, have similar backgrounds, and so on. Now, if we observe differences in outcomes between these two groups, then the differences must be due to the only thing that differs between them -- that one got the program and the other didn't.

OK, so how do we create two groups that are "equivalent"? The approach used in experimental design is to assign people randomly from a common pool of people into the two groups. The experiment relies on this idea of random assignment to groups as the basis for obtaining two groups that are similar. Then, we give one group the program or treatment and we don't give it to the other. Finally, we measure the same outcomes in both groups.

The key to the success of the experiment is in the random assignment. In fact, even with random assignment we never expect that the groups we create will be exactly the same. How could they be, when they are made up of different people? We rely on the idea of probability and assume that the two groups are "probabilistically equivalent" or equivalent within known probabilistic ranges.

So, if we randomly assign people to two groups, and we have enough people in our study to achieve the desired probabilistic equivalence, then we may consider the experiment to be strong in internal validity and we probably have a good shot at assessing whether the program causes the outcome(s).
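A minimal sketch of random assignment, using a hypothetical pool of participants, might look like the following. The group means of a pre-existing score illustrate what "probabilistically equivalent" means in practice: close, but rarely identical.

```python
import random
import statistics

def randomly_assign(participants, seed=None):
    """Shuffle a common pool and split it into treatment and control groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical pool of 100 participants with some pre-existing score.
participants = [{"id": i, "score": random.gauss(50, 10)} for i in range(100)]
treatment, control = randomly_assign(participants)

# With random assignment the groups are only probabilistically equivalent:
# their mean baseline scores should be close, but not exactly equal.
print(statistics.mean(p["score"] for p in treatment))
print(statistics.mean(p["score"] for p in control))
```

With larger pools the two group means converge, which is the sense in which "enough people in our study" buys the desired probabilistic equivalence.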

But there are lots of things that can go wrong. We may not have a large enough sample. Or, we may have people who refuse to participate in our study or who drop out part way through. Or, we may be challenged successfully on ethical grounds (after all, in order to use this approach we have to deny the program to some people who might be just as deserving of it as others). Or, we may get resistance from the staff in our study who would like some of their "favorite" people to get the program. Or, the mayor might insist that her daughter be put into the new program in an educational study because it may mean she'll get better grades.

The bottom line here is that experimental design is intrusive and difficult to carry out in most real world contexts. And, because an experiment is often an intrusion, you are to some extent setting up an artificial situation so that you can assess your causal relationship with high internal validity. If so, then you are limiting the degree to which you can generalize your results to real contexts where you haven't set up an experiment. That is, you have reduced your external validity in order to achieve greater internal validity.

In the end, there is just no simple answer (no matter what anyone tells you!). If the situation is right, an experiment can be a very strong design to use. But it isn't automatically so. My own personal guess is that randomized experiments are probably appropriate in no more than 10% of the social research studies that attempt to assess causal relationships.

Experimental design is a fairly complex subject in its own right. I've been discussing the simplest of experimental designs -- a two-group program versus comparison group design. But there are lots of experimental design variations that attempt to accomplish different things or solve different problems. In this section you'll explore the basic design and then learn some of the principles behind the major variations.