
Generalizing and Describing


Presentation Transcript


  1. Generalizing and Describing • A generalization, as a term used in critical thinking, is a statement that attributes some characteristic to all, most, or some members of a given set. For example: • Many students work full time. • Twenty-five percent of Americans believe in astrology. • A generalized description is a blanket statement based on information about every member of a group. We say: • “All my compact discs are from the ’90s,” or • “Every woman I know is good at math.” • We feel safe in making these broad statements because they cover every person and thing we’ve mentioned.

  2. Inductive Generalization • An inductive generalization is an argument in which a generalization is claimed to be probably true on the basis of information about some members of a particular class. Here are three examples: • Six months ago I met a rancher from Montana, and she was friendly. • Two months ago I met a waiter from Montana, and he was friendly. • One week ago I met a student from Montana, and she was friendly. • I guess most people from Montana are friendly.

  3. Inductive Generalization II • Another example: • All dinosaur bones so far discovered have been more than 65 million years old. • Therefore, probably all dinosaur bones are more than 65 million years old. • Finally, • Most Republicans I know are conservative. • Therefore, most Republicans are conservative.

  4. Generalizing • A number of claims we make are generalizations that we think hold true. For instance, we see one cat, then another, and we recognize features they have in common. When we have seen enough of them, we realize they belong together. We reach the conclusion that “All creatures called ‘cats’ are four-legged, carnivorous, furry mammals, usually domesticated, standing about 1 foot tall,” and so forth. • Having abstracted the characteristics they have in common, we form a generalization that holds true for all cats.

  5. Generalizing, from M&P • What do these statements have in common? • “I avoid philosophy courses. They are too abstract for me.” • “5% of American males at age 20 are taller than 6’3”.” • They are both GENERAL statements, statements about an entire group or “population” of things. • One is a general statement about philosophy classes: “they” are too abstract for me. • The other is a general statement about American males at age 20: 5% of them are taller than 6’3”.

  6. How, logically, does one support a general statement? • Through GENERALIZING! • “Generalizing” = arriving at a conclusion about an entire group by considering a finite subset of that group. • It isn’t practical to measure the height of every male. • So we generalize from a subset or “SAMPLE”: a sample of males; a sample consisting of the philosophy courses I have taken. • Scientific sampling: a branch of statistics.
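
To make this concrete, here is a minimal Python sketch of generalizing from a sample (the population, its size, and the heights are simulated for illustration, not real data): we measure only a small random subset and project the result onto the whole group.

```python
import random

# Hypothetical target population: 10,000 simulated heights in inches.
random.seed(0)
population = [random.gauss(70, 3) for _ in range(10_000)]

# It isn't practical to measure everyone, so we draw a random SAMPLE...
sample = random.sample(population, 200)

# ...and generalize: estimate the share taller than 6'3" (75 inches).
sample_share = sum(h > 75 for h in sample) / len(sample)
true_share = sum(h > 75 for h in population) / len(population)
print(f"Sample estimate: {sample_share:.1%}   Whole population: {true_share:.1%}")
```

The sample estimate will typically land within a few percentage points of the population figure, which is the error-margin behavior discussed in a later slide.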

  7. Scientific sampling: a branch of statistics. • Much scientific knowledge requires scientific sampling. • How do we know if a vaccine works? • Can we try it on everyone? How do we go about determining if it is relatively safe?

  8. Note: the fallacy of hasty generalization • Do you remember what this was from the fallacy section? • It is overestimating the strength of an argument based on a small sample, placing more confidence in the sample than its size warrants (a short simulation follows below). • Fallacy of anecdotal evidence: my grandpa drank scotch every day and he lived to be 90 years old. • Fallacy of biased generalizing: Fox News polls, Lou Dobbs polls, and so forth.
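
As a rough illustration (the 30 percent figure and the sample sizes are invented), the following Python simulation shows why small samples invite hasty generalization: estimates from a handful of cases swing wildly, while estimates from larger samples cluster near the true proportion.

```python
import random

random.seed(1)
# Hypothetical population in which 30% have some feature (1 = has it, 0 = does not).
population = [1] * 300 + [0] * 700

def estimate(sample_size):
    return sum(random.sample(population, sample_size)) / sample_size

# Five tiny samples versus five large samples of the same population.
print([round(estimate(3), 2) for _ in range(5)])    # varies widely from draw to draw
print([round(estimate(200), 2) for _ in range(5)])  # clusters near 0.30
```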

  9. Scientific sampling lets us know about large populations • Health statistics • Public opinion • Marketing research/product preferences • Quality control in manufacturing • Climate research, e.g., global warming

  10. Some terms you need to know: • Sample • Target Population • The Feature • The finite subset of the group is called the “SAMPLE”; the entire group is the “TARGET POPULATION” (or just “target,” or sometimes simply “the population”). • Again, GENERALIZING is arriving at a conclusion about an entire group or “population” by considering a finite subset of that group.

  11. Which would you rather be? • Rich and stupid • Poor and smart • Poor and smart? Or rich and stupid? • Let us generalize… • Can we take the answers in this class and make a generalization about IPFW as a whole? • Why or why not? • The feature is the characteristic being attributed: x% of this class prefers to be ____. Whatever fills in that blank is “the feature.”

  12. More from Ch. 10, M&P • We want the sample to have the same RELATED VARIABLES as the target population, and in the same proportion. • A sample that has a RELATED VARIABLE in a proportion not found in the target population is ATYPICAL or “BIASED.” • Do we have a fair proportion of freshmen, sophomores, grad students, and so forth? • Do we have the right mix of business majors and liberal arts majors?

  13. Generalizing II • Isn’t generalizing the same thing as stereotyping? Only if each member is treated as typical and assumed to possess all the group’s features. Each person should be treated as an individual even though he or she will probably exhibit some characteristics of the group. • Can we generalize from one instance? That depends on the case. From touching a rose’s thorn, we can determine that thorns on a rose are sharp. From dropping a pencil to the ground, we can determine that gravity always pulls objects downward.

  14. Generalizing III • Generalizing is unavoidable. To say, “You can’t generalize,” is to make a generalization. • Hegel wrote, “An idea is always a generalization, and generalization is a property of thinking. To generalize means to think.” • Samuel Butler wrote, “Life is the art of drawing sufficient conclusions from insufficient premises.” • We generalize through the lessons we draw from experience. Since we must generalize, the trick is to do it well. • The main problem in generalizing is to figure out how to achieve reliability. What percentage of a group must be examined for us to feel secure about a generalization in our argument? Which members should we use as a representative cross section?

  15. Using a fair sample • Size. The number of cases we examine should be large enough to represent the whole. One way to judge how many that should be is to look at what we are generalizing about. For some things we will need a large sample; for others, only a few cases. • Example 1: • The coffee in that pot is lousy; I just had a cup. • There is a high probability that you will not like a second cup of coffee from the same pot.

  16. Using a fair sample II • Example 2: • Cocker spaniels are nice dogs, but they eat like pigs. When I was a kid, we had a little cocker that ate more than our big collie. • One cocker is not enough on which to base a generalization about a whole breed of dogs. • If we are talking about the taste of pepper, a few grains would be sufficient for reaching a general conclusion. • If we want to generalize about the amount of pepper used in the average American household, we would have to conduct a large survey all across America.

  17. Fair Sample III • If we want to generalize in an argument we need a large enough sample on which to base it. To determine whether the sample is sufficiently large we need to see what the generalization is about. • We may not always know the subject well enough to determine in advance whether a large or small sample is needed. We may know that a generalization about the hardness of diamonds calls for just a few cases and the use of pepper may require a large sample, but when it comes to birds, vertebrates and beetles we may not be so sure.

  18. Fair Sample IV • We may not know that there are 14,000 varieties of birds, 40,000 kinds of vertebrates, and 180,000 species of beetles and that the sample size would have to be huge. • In these cases we may be able to use another method to determine the ideal size of the sample. We can increase the sample size until the results begin to repeat themselves. Then we can stop, knowing we have examined enough cases.

  19. Fair Sample V • The traditional example of this is the marble experiment. Suppose that we want to know the percentage of red, black, and clear marbles in a jar. We would first reach inside for a handful of the marbles and, let’s say, come up with 30 percent red, 40 percent black, and 30 percent clear. We then put the marbles back in the jar and shake it up really well. Maybe this time we count 40 percent red, 50 percent black, and 10 percent clear. We keep putting the marbles back in the jar and shaking it very well until the same percentages keep showing up. When this happens we know we have gone far enough. We have probably eliminated the errors in our sample and are getting accurate results.
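
A rough Python sketch of this procedure (the jar's true mix, the handful sizes, and the "close enough" threshold are all invented for illustration): enlarge the sample until two successive draws give roughly the same percentages, at which point we stop.

```python
import random
from collections import Counter

random.seed(2)
# Hypothetical jar: the true mix is 30% red, 40% black, 30% clear.
jar = ["red"] * 300 + ["black"] * 400 + ["clear"] * 300

def handful_percentages(size):
    draw = random.sample(jar, size)  # grab a handful, then "put the marbles back"
    counts = Counter(draw)
    return {c: round(100 * counts[c] / size) for c in ("red", "black", "clear")}

previous = None
for handful in range(30, len(jar) + 1, 30):
    current = handful_percentages(handful)
    print(handful, current)
    # Stop once successive handfuls agree to within 5 percentage points.
    if previous and all(abs(current[c] - previous[c]) <= 5 for c in current):
        break
    previous = current
```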

  20. Fair Sample VI • This method is the most reliable for determining whether our generalization is based on an adequate sample size. Rather than speculating on the proper size based on the nature of the subject, we should use this method whenever possible.

  21. Simpson’s Paradox • Moore and Parker: There are many ways that statistical reports can be misleading. The following illustrates one of the stranger ways, known to statisticians as “Simpson’s Paradox”: • Let’s say you need a fairly complicated but still routine operation and you have to pick one of the two local hospitals, Mercy or Saint Simpson’s, for the surgery. You decide to pick the safer of the two, based on their records for patient survival during surgery. You get the numbers: Mercy has 2,100 surgery patients in a year, of whom 63 die, a 3 percent death rate. Saint Simpson’s has 800 surgery patients, of whom 16 die, a 2 percent death rate. You decide it’s safer to have your operation done at Saint Simpson’s.

  22. Simpson’s Paradox II • The fact is, you could actually be more likely to die at Saint Simpson’s than at Mercy Hospital, despite the former hospital’s lower death rate for surgery patients. But you would have no way of knowing this without learning some more information. In particular, you need to know how the total figures break down into smaller, highly significant categories. • Consider the categories of high-risk patients (older patients, victims of trauma) and low-risk patients (e.g., those who arrive in good condition for elective surgery). Saint Simpson’s may have the better-looking overall record, not because it performs better, but because it gets a higher proportion of low-risk patients than does Mercy.

  23. Simpson’s Paradox III • Let’s say Mercy had a death rate of 3.8 percent among 1,500 high-risk patients, whereas Saint Simpson’s, with 200 high-risk patients, had a death rate of 4 percent. Mercy and Saint Simpson’s each had 600 low-risk patients, with a 1 percent death rate at Mercy and a 1.3 percent rate at Saint Simpson’s. • So, as it turns out, it’s a safer bet going to Mercy Hospital whether you’re high risk or low, even though Saint Simpson’s has the lower overall death rate. • The moral of the story is to be cautious about accepting the interpretation that is attached to a set of figures, especially if they lump together several categories of the thing being studied.
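
The arithmetic in slides 21 through 23 can be checked directly; here is a short Python sketch using those figures (the 1.3 percent low-risk rate at Saint Simpson's reproduces the stated totals only approximately, since 1.3 percent of 600 patients is about 8 deaths).

```python
# (patients, death rate) for each risk category, taken from the slides.
mercy    = {"high": (1500, 0.038), "low": (600, 0.010)}
simpsons = {"high": (200,  0.040), "low": (600, 0.013)}

def overall_rate(hospital):
    deaths = sum(n * rate for n, rate in hospital.values())
    patients = sum(n for n, _ in hospital.values())
    return deaths / patients

# Within EACH risk category, Mercy has the lower death rate...
for risk in ("high", "low"):
    print(f"{risk}-risk: Mercy {mercy[risk][1]:.1%} vs Saint Simpson's {simpsons[risk][1]:.1%}")

# ...yet Saint Simpson's overall rate is lower, because it treats far fewer high-risk patients.
print(f"Overall: Mercy {overall_rate(mercy):.1%} vs Saint Simpson's {overall_rate(simpsons):.1%}")
```

Mercy wins in both categories yet loses on the combined figure, because the combined figure is dominated by the mix of patients each hospital happens to treat.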

  24. Randomness • In addition to making sure we have a large enough sample size, we must also make sure we have enough randomness. In other words, we must make sure that the sample studied represents the whole and does not bias our conclusion. We want to avoid “loading” the sample in favor of a particular result and to give every member of the class an equal chance of being chosen. • For example, to avoid bias in a generalization about how the public feels about legalizing marijuana, we should not sample only college students, because they may be more pro-legalization and not represent the majority of the public.
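
Here is a small Python illustration of the point (every number is made up): a convenience sample drawn only from college students "loads" the result, while a simple random sample gives every member of the public an equal chance of being chosen.

```python
import random

random.seed(3)
# Hypothetical public: 300 college students (70% support) and 700 others (40% support).
public = ([("student", random.random() < 0.70) for _ in range(300)] +
          [("other",   random.random() < 0.40) for _ in range(700)])

def support_rate(people):
    return sum(supports for _, supports in people) / len(people)

students_only = [p for p in public if p[0] == "student"]   # loaded toward one result
random_sample = random.sample(public, 200)                 # every member has an equal chance

print(f"True rate:            {support_rate(public):.0%}")
print(f"Students-only sample: {support_rate(students_only):.0%}")
print(f"Random sample:        {support_rate(random_sample):.0%}")
```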

  25. Randomness II • We can be biased because of the prejudices we bring to our investigation, but also because of more subtle psychological factors. If we buy a blue Blazer, we are amazed at all the blue Blazers out on the road. • In reality, we are simply noticing, all at once, the blue Blazers that were already there. • To combat this tendency, we should be alert to counterexamples, picking out the number of Silverados or Rangers on the road, or taking a sample of ten cars at a time to see the proportion of each brand. Then we will avoid the trap of seeing only what we are looking for and confirming what we already believe.

  26. Error margin • “ERROR MARGIN” expresses the range of random variation from sample to sample. • See the table on page 360. • Note that the larger the random sample size (the larger n is), the smaller the error margin. • CONFIDENCE LEVEL: expresses the probability that samples of a given size will have values within that error margin.
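
The table on page 360 is not reproduced here, but the standard worst-case formula for a proportion at the 95 percent confidence level shows the pattern the slide notes: as n grows, the error margin shrinks. A minimal Python sketch (the sample sizes are chosen arbitrarily):

```python
import math

# 95% confidence level, worst case p = 0.5: margin = z * sqrt(p * (1 - p) / n).
def error_margin(n, z=1.96, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 250, 500, 1000, 1500):
    print(f"n = {n:5d}   error margin about +/- {error_margin(n):.1%}")
```

At n = 1,000 this gives roughly plus or minus 3 percent, the figure commonly quoted for national opinion polls.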

  27. Stratification • For stratification we want to include all strata or classes that could have an important effect on our generalization. Every relevant group must be taken into account. For example, if we wanted to generalize about alcohol consumption in the United States, we should be sure that the sample includes the Northeast, the South, the West, and the Midwest. We would want to include teenagers and senior citizens, rich and poor, different racial and ethnic groups, and so forth. If we left a relevant segment out, our results would not be reliable.
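
A minimal Python sketch of the mechanics of stratification (the regional proportions are invented, and a real survey would also stratify on age, income, and the other variables the slide mentions): each stratum is sampled in proportion to its share of the target population.

```python
import random

random.seed(4)
# Hypothetical target population; each entry stands in for one person, labeled by region.
population = (["Northeast"] * 170 + ["South"] * 380 +
              ["Midwest"] * 210 + ["West"] * 240)

def stratified_sample(pop, size):
    """Draw from each stratum in proportion to its share of the population."""
    strata = {}
    for person in pop:
        strata.setdefault(person, []).append(person)
    sample = []
    for members in strata.values():
        k = round(size * len(members) / len(pop))
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(population, 100)
print({region: sample.count(region) for region in set(sample)})
# e.g. {'South': 38, 'West': 24, 'Midwest': 21, 'Northeast': 17} -- mirrors the population mix
```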

  28. Loaded/slanted questions • Please see pages 368-369

  29. Summary • Check for adequate size in terms of the nature of the subject matter. In an experimental situation, take repeated samples until the results begin to repeat themselves. • Be sure the sampling is random and free of bias, so that each of the relevant elements has an equal chance of being chosen. • Make certain the sample is stratified, which means that all relevant categories are included and none is excluded that would significantly affect the generalization.

  30. The explanatory hypothesis • A hypothesis can be defined as an explanatory principle accounting for known facts. In hypothetical thinking we want to know why something is true, and we reason backward to find some explanation for the facts, one that makes sense of them. We use our imagination to find some reason why things are the way they are.

  31. The explanatory hypothesis II • Trial lawyers employ hypotheses in trying to construct a plausible account of a crime. • The prosecuting attorney might argue that because the accused man was apprehended in the vicinity of the jewelry store with the stolen diamonds in his pocket and has a history of arrests for larceny, he must have committed the crime. • The defense attorney, on the other hand, might construct a scenario in which the accused was walking to a nearby shop to buy a cigar and found the diamonds on the ground just as the police car drove up; the real thief must have dropped them while running from the scene of the crime.

  32. The explanatory hypothesis III • Some hypotheses, of course, bear little relation to reality: for example, that rain dances will cause it to rain or that sacrificing virgins will appease the gods. • What separates a reliable hypothesis from an unreliable one? 1. Consistency with other hypotheses we accept. A new hypothesis should be consistent with the bulk of the hypotheses that we believe to be true. Of course, sometimes a new hypothesis will put us into an entirely new paradigm. For example, Copernicus revolutionized astronomy by proposing that the Sun, not the Earth, is the center of the solar system.

  33. The explanatory hypothesis IV 2. Plausibility. Since we have explanations for a great many occurrences in the natural world, any new hypothesis must be plausible according to common sense and traditional explanations. Plausibility is a rough assessment of how credible a claim seems to us. For example, “McDonald’s has sold more hamburgers than any other fast-food chain” seems plausible. However, the claim “Charlie’s 87-year-old grandmother swam across Lake Michigan in the middle of winter” seems less plausible because of the obvious way it conflicts with what we know about 87-year-old bodies, about Lake Michigan, about swimming in cold water, and so forth. We would want to see it before we believed it.

  34. The explanatory hypothesis V • Comprehensiveness. Any hypothesis that we present should be the most complete explanation that we can find. • Suppose we are writing an English paper on Huckleberry Finn by Mark Twain. We could claim that the book presents a portrait of life on the Mississippi River before the Civil War, or that it is a classic adventure tale, or that Twain’s purpose was to write a work of irony showing how expressive American speech could be. However, a more comprehensive interpretation that could incorporate these elements might be that the book is about Huck Finn’s desire for freedom, honesty, and justice set against the cruelty and prejudices of a complacent society. This is manifested especially through his flight from home and his friendship with Jim, which requires him to break the moral rules and the law itself.

  35. The explanatory hypothesis VI • Simplicity. This principle is attributed to the fourteenth-century theologian/philosopher William of Ockham, and it is also known as Ockham’s Razor. It states that “entities should not be multiplied beyond what is required”; that is, keep explanations as simple as possible.

  36. The explanatory hypothesis VII • Predictability. If our hypothesis is sound, we should be able to predict events based on that assumption. That is, given the conditions described in our hypothesis, we can expect certain results to follow.
