# FIRST ASSIGNMENT For the Death Penalty article: Answer this assignment in the following manner: 1. What is the Value Conflict? Opponents vs. Proponents 2. Identify some general Assumptions in the article



FIRST ASSIGNMENT

For the Death Penalty article, answer this assignment in the following manner:

1. What is the Value Conflict? Opponents vs Proponents

2. Identify some general Assumptions in the article (Unstated beliefs).

3. Identify the Descriptive Assumption (Unstated belief).

Since assumptions are not stated, you will not be able to quote a statement made by the author; instead, you must analyze the premise (something assumed or taken for granted) of the statement.

What are the Values Conflicts and Assumptions?

Remember, the value conflict is between the opponents and proponents of the death penalty based on their respective value preferences. Assumptions are unstated beliefs; they are not stated in the article. Can you support or refute these assumptions with additional quality evidence?

For example: by definition, assumptions are unstated. The author states, "Even life in prison does not guarantee that they will not kill again." Certain murderers have murdered again in prison, so this claim is factual. Some of the underlying assumptions are:

- One assumption is that persons who murder consider the consequences of their actions. But do they?
- Another is that repeat murder can be prevented only by implementing the death penalty. (However, states with or without the death penalty have managed violent prisoners by building supermax prisons and isolating these prisoners from the general population.)

Do you see that the assumption is derived from the factual claim, as an unstated belief that supports the explicit reasoning? We can refute the premises (assumptions) upon which this claim rests by identifying the underlying assumptions and producing counterarguments.

What are the Descriptive Assumptions?

What are the Descriptive Assumptions in the Death Penalty article? Descriptive assumptions are also unstated. They are the glue that binds the reasons to the conclusion. The descriptive assumption is the main underlying assumption in the article. What must we assume about individuals convicted of murder? Don't overthink this. If you supported the death penalty with the intensity of the author, what would you believe in your heart of hearts?

SECOND ASSIGNMENT

1. Determine the mean, mode, median, and range for the 16 decades of hurricane data. Consider this the known population. Compare the sample mean you selected in last week's assignment to the population mean (16 decades). How do they compare? Is your sample mean representative of the population mean? Keep the process of random sampling and the normal distribution in mind when answering the assignment questions.

2. Answer the following questions.

a. Statistical analysis of the 16 decades of hurricane data reveals a 95% confidence interval of +/-1.10652102. Add this number to, and subtract it from, the population mean to determine the 95% CI, which spans the values 2 standard deviations above and below the mean.

b. Suppose 10 hurricanes were to occur. Based on the 95% CI, what is the probability of such an occurrence? Think normal distribution.

c. Based on statistical theory of the normal distribution, what conclusion can you draw?


Module 3: Critical Thinking

What are the Value Conflict, Assumptions, and Descriptive Assumptions?

II. Course Objectives: The course will enable students to:

- Analyze the structure of an argument.
- Understand the way language is used to influence thinking, feelings, and behavior.
- Identify ambiguities, assumptions, values, and fallacies in reasoning.

Assumptions are:

- Hidden or unstated
- Taken for granted
- Influential in determining the conclusion
- Necessary for the reasoning to make sense
- Potentially deceptive

Since assumptions influence the conclusion and make the reasoning work, look for assumptions in the movement from the reasons to the conclusion. Once you can find the assumptions, you can then see whether they are valid.

Values

Values are unstated ideas that people see as worthwhile. They provide standards of conduct by which we measure the quality of human behavior.

Value Assumptions

Value assumptions are taken-for-granted beliefs about the relative desirability of certain competing values: an implicit preference for one value over another in a particular context. Value assumptions are beliefs about the way the world should be; they are also called prescriptive assumptions.

Value Conflicts

Value conflicts arise from the differing values that stem from different frames of reference.

Typical value conflicts and sample controversies:

1. Loyalty vs. Honesty: Should you tell your parents about your sister's drug habit?
2. Competition vs. Cooperation: Do you support the grading system?
3. Freedom of the Press vs. National Security: Is it wise to hold weekly presidential press conferences?
4. Equality vs. Individualism: Are racial quotas for employment fair?
5. Order vs. Freedom of Speech: Should we imprison those with radical ideas?
6. Security vs. Excitement: Should you choose a dangerous profession?
7. Generosity vs. Material Success: Is it desirable to give financial help to a beggar?
8. Rationality vs. Spontaneity: Should you check the odds before placing a bet?
9. Tradition vs. Novelty: Should divorces be easily available?

TIPS FOR IDENTIFYING VALUE ASSUMPTIONS

- Investigate the communicator's background. Learning about the communicator is invaluable; discovering the communicator's value preferences is key to understanding their point of view. For example, on the topic of minimum wage, a labor leader will define the issue quite differently than a member of corporate management.
- Ask, "Why do the consequences of the communicator's position seem so important to them?" Often the desirability of a communicator's outcome is based on their personal beliefs rather than on objectivity. Suppose someone wants to ban automobiles in order to eliminate pollution. Obviously this person is an ardent environmentalist, but reading into their value preferences you can see that they put a premium on the environment and place a lesser value on convenience.
- Search for similar social controversies to find analogous value assumptions. Many social controversies rest on analogous value assumptions. A person who supports an increase in welfare funding is also likely to support the expansion of education and training programs for welfare recipients.
- Use reverse role playing. Take a position contrary to the communicator's. This tactic will allow you to see the pros and cons of the communicator's point of view and to expand the list of value preferences.
- Look for common value conflicts. Many social controversies turn on the same value conflict. A very common one, found in many arguments, is the rights of the individual versus the welfare of the group; another is individual responsibility versus collective responsibility.

Critical Thinking: What Are the Descriptive Assumptions?

DESCRIPTIVE ASSUMPTIONS

Descriptive assumptions are beliefs about the way the world is, was, or is going to be.
All of the general rules and definitions of assumptions that apply to value assumptions apply to descriptive assumptions as well.

TIPS FOR IDENTIFYING DESCRIPTIVE ASSUMPTIONS

- Keep thinking about the gap between the conclusion and the reasons. Remember that assumptions are hidden or unstated ideas that are influential in determining the conclusion. Therefore, look for assumptions in the movement between the reasons and the conclusion.
- Look for ideas that support the conclusion. Reasons are often presented without any explicit support. It is the implicit reasoning, or the assumption, that makes the argument work. Whenever you identify a reason, you should be able to explain explicitly, incorporating the assumptions, why it supports the conclusion.
- Identify with the communicator. Learn more about the communicator and their beliefs. If you can identify with the communicator, you can imagine the reasoning they use to arrive at their conclusion.
- Identify with the opposition. If you are unsuccessful in identifying with the communicator, you can always opt to identify with the opposition. Consider the reasoning required to take the opposing point of view.
- Recognize the potential existence of other means of attaining the advantages referred to in the reasons. While there can be many ways of achieving a goal, the communicator wants you to believe that their plan is the most sensible way. If you can demonstrate another means by which to achieve the same goal as the communicator, you have discovered an assumption.
- Learn more about the issue. If an issue is important enough, you should learn as much as you can about it. The more you learn about a particular issue, the easier it will be to discover the assumptions associated with it.

2. Statistics: The Well-Chosen Average

II. Course Objectives: The course will enable students to:

- Recognize statistical generalizations, manipulations, and methods in a variety of contexts.
- Assess the accuracy and value/worth of claims and arguments.
- Draw and defend reasonable conclusions from presented evidence.
- Improve composition and writing skills, especially in logical organization.

The mean, or average, is the sum of all the data points divided by the total number in the sample. The mode is the most frequently occurring number in the sample. The median is the middle number. The range is the lowest to the highest number in the sample.

National Hurricane Center: Category 3-5 Hurricanes, 1851-2000. For sampling purposes, the decades are numbered in order: 1: 1851-1860, 2: 1861-1870, 3: 1871-1880, 4: 1881-1890, 5: 1891-1900, 6: 1901-1910, 7: 1911-1920, 8: 1921-1930, 9: 1931-1940, 10: 1941-1950, 11: 1951-1960, 12: 1961-1970, 13: 1971-1980, 14: 1981-1990, 15: 1991-2000.

Let's take a random sample from the above hurricane data, using this randomly generated list of numbers: 4, 7, 8, 9, 13.

| Random number | Corresponding decade | Hurricanes |
|---|---|---|
| 4 | 1881-1890 | 5 |
| 7 | 1911-1920 | 7 |
| 8 | 1921-1930 | 5 |
| 9 | 1931-1940 | 8 |
| 13 | 1971-1980 | 4 |

Arrange your sample in ascending order: 4, 5, 5, 7, 8. Now determine the mean, mode, median, and range for the sample. Did you get the following answers? Mean: 5.8. Mode: 5. Median: 5. Range: 4-8.

What you now know as the mean, mode, and median constitutes what are referred to as measures of central tendency. Do you know why? Since we selected our sample randomly, with each unit having an equal chance of selection, these data tend to form a normal distribution, or normal curve: a bell-shaped curve in which the mean, mode, and median all line up in the center. Why is this so important? Because many of the characteristics or variables we measure are approximately normally distributed: income, rainfall, stock prices, IQ, and hurricanes, to name a few. More importantly, the Central Limit Theorem tells us that if we take repeated random samples from a population, the distribution of the repeated sample means will form a normal distribution. Why is this important?
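The sample statistics above can be checked with a short script. This is a sketch in Python (chosen only for illustration; any statistics tool would do), using the sample drawn above:

```python
from statistics import mean, median, mode

# The random sample of category 3-5 hurricane counts drawn above
sample = [4, 5, 5, 7, 8]

print("mean:", mean(sample))                 # 5.8
print("mode:", mode(sample))                 # 5
print("median:", median(sample))             # 5
print("range:", (min(sample), max(sample)))  # (4, 8)
```

This reproduces the answers given in the text: mean 5.8, mode 5, median 5, range 4-8.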
Knowledge of the normal distribution permits us to generalize, that is, to draw inferences about the population from a single sample. Thus, the Central Limit Theorem and the normal distribution are the cornerstone of statistical reasoning in critical thinking.

The normal distribution is a descriptive model of real-world situations. It is a continuous frequency distribution of infinite range, the most important probability distribution in statistical reasoning, and an important tool in the analysis of data. It links frequency and probability through what is referred to as the empirical rule. The empirical rule estimates the spread of data given the mean and standard deviation for a set of data that follows the normal distribution. The empirical rule states that:

- 68% of the data will fall within 1 standard deviation of the mean
- 95% of the data will fall within 2 standard deviations of the mean
- 99% of the data will fall within 3 standard deviations of the mean

Note that these values are approximations. For example, according to the normal probability density function, 95% of the data fall within 1.96 standard deviations of the mean; thus, 2 standard deviations is a convenient approximation. Additionally, 99% of the data fall within 2.58 standard deviations of the mean, or approximately 3 standard deviations.

Chapter 3: The Little Figures That Are Not There

Topics covered in Huff: hypothesis testing, sample size, confidence intervals and p-values; regression to the mean.

A hypothesis is a statement of prediction. As Huff states, Doakes toothpaste will result in no change (the null hypothesis), increased cavities, or decreased cavities. More formally, these hypotheses are stated as:

- H0: The null hypothesis (no difference in cavities)
- H1: The first alternative hypothesis (an increase in cavities)
- H2: The second alternative hypothesis (a decrease in cavities)

What criteria can we use to determine whether Doakes toothpaste is superior to any other toothpaste?
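The empirical-rule percentages quoted above can be verified directly from the normal curve. A minimal sketch using only the Python standard library, via the identity that P(|Z| <= k) = erf(k / sqrt(2)) for a standard normal Z:

```python
from math import erf, sqrt

def prob_within(k):
    """P(|Z| <= k) for a standard normal Z: the share of data within k sd of the mean."""
    return erf(k / sqrt(2))

# Few-decimal check of the empirical rule and its exact counterparts
for k in (1, 1.96, 2, 2.58, 3):
    print(f"within {k} sd: {prob_within(k):.4%}")
```

This confirms the figures in the text: about 68% of the data within 1 standard deviation, 95% within 1.96 (approximately 2) standard deviations, and 99% within 2.58 (approximately 3) standard deviations.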
Clearly, Doakes would like to demonstrate that their toothpaste results in fewer cavities, H2, the second alternative hypothesis. But before they can do that, they must reject H0, the null hypothesis.

First we need to understand how sample size influences sampling error. As Huff points out, an inadequate sample size can, by chance alone, produce a statistically significant finding. Intuitively you know that the probability of tossing a coin and obtaining either heads or tails is 50/50. But is it? If the number of trials is limited, you are not likely to obtain 50/50 results, but rather something quite different. The Law of Large Numbers tells us that as the number of trials of a random event (coin tosses) increases, the percentage difference between the expected and actual outcomes approaches zero, but only when a sufficiently large number of trials has been performed. Small samples therefore tend to be more variable, because they cannot capture the full range of units in the population. Larger samples, on the other hand, capture a wider range of units and are therefore likely to be more representative and less variable. Unfortunately, no matter how we try, our sample will never be a perfect representation of the population, due to random or sampling error. And so, in the words of Aristotle, "It is the nature of probability that improbable things will happen."

This variability about the mean is referred to as the standard error or standard deviation. Confidence intervals, which are derived from the standard deviation, are frequently reported in the peer-reviewed literature and provide us the little figures that are not there. If we take repeated random samples of the hurricane data, the distribution of these sample means will form a normal distribution, or bell curve (see the Central Limit Theorem and the normal distribution above). The variability of a sampling distribution (multiple samples of the population mean) is called the standard error of the mean.
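The Law of Large Numbers described above is easy to see in simulation. A minimal sketch (fair-coin tosses; the seed and trial counts are arbitrary choices for a reproducible run):

```python
import random

random.seed(1)  # arbitrary seed, so the run is reproducible

def heads_fraction(n):
    """Fraction of heads observed in n fair-coin tosses."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Few trials wander far from 50/50; many trials settle toward it
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9,} tosses: {heads_fraction(n):.4f}")
```

With only 10 tosses the observed fraction can stray well away from 0.5; with a million tosses it sits very close to 0.5, just as the Law of Large Numbers predicts.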
The variability of a single sample is called the standard deviation, and from it confidence intervals are determined, most commonly the 95% CI. You will recall from previous lectures that the normal distribution is a descriptive model that links the frequency distribution to a probability distribution and enables us to make predictions. Consequently, we know that in a normal distribution, 95% of the area under the curve lies within 2 standard deviations of the mean. We can reason, then, that 95% of the time the sample mean will fall within 2 standard errors of the mean. Expressed another way, we are 95% certain (confident) that the true mean lies within 2 standard deviations of the mean.

Often, the only information we have about a population comes from a single sample. We estimate the population parameters from this single sample and make inferences and predictions about the population. We can do so because of the Central Limit Theorem and the normal distribution.

Below is an example of the normal distribution of the hurricane data, with a mean of 5.8 and a 95% CI of 3.8-7.8:

3.8 (-2 sd) | Mean 5.8 | 7.8 (+2 sd)

Knowing the mean and confidence interval for the hurricane data, take several random samples of just 2 data points. This in effect constitutes inadequate sampling, and we should see greater variability. What is the mean of your sample? Does it fall within the 95% CI?

The 95% confidence interval provides an objective standard by which we make a decision, quantifying the likelihood of an outcome. Our decision rule is dichotomous: either we accept the null or we reject the null. Either there is a difference or there is no difference. In MAT 151 you will calculate the actual p-value for your sample. Here, we are concerned only with the statistical reasoning behind the decision rule, not the calculations.
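The small-sample exercise above can also be simulated. The sketch below assumes a normal population with mean 5.8 and standard deviation 1.0 (the standard deviation is my inference from the stated 95% CI of 3.8-7.8, i.e. the mean plus or minus 2 standard deviations); the seed, replication count, and comparison size n=30 are illustrative choices:

```python
import random
from statistics import mean, pstdev

random.seed(42)  # illustrative seed for reproducibility

POP_MEAN = 5.8
POP_SD = 1.0  # inferred: the 95% CI of 3.8-7.8 is the mean +/- 2 sd

def sample_mean(n):
    """Mean of one random sample of size n from the assumed normal population."""
    return mean(random.gauss(POP_MEAN, POP_SD) for _ in range(n))

# Draw 1,000 sample means at each size: tiny samples (n=2) scatter far more
for n in (2, 30):
    means = [sample_mean(n) for _ in range(1000)]
    print(f"n={n:2}: sample means range {min(means):.2f} to {max(means):.2f}, "
          f"standard error about {pstdev(means):.2f}")
```

The standard error of the mean shrinks roughly as 1 over the square root of n, which is why a sample of just 2 data points is inadequate and shows much greater variability.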
If a subsequent sample mean were 10, what is the probability of such an occurrence? Answer: p < .05. Since the upper limit of the CI is 7.8 and our sample mean is 10, it falls outside the upper CI. We would expect such a sample mean fewer than 5 times in 100, or less than 5 percent of the time. It is unlikely, then, that the true mean lies outside this confidence interval. How unlikely? The p-value (probability value) corresponding to the 95% CI is p < .05: only 5 times in 100 would we expect the true mean to lie outside this area, either higher or lower than 2 standard deviations from the mean.

When you read a research paper stating that the researcher set the p-value at p < .05, the researcher is establishing the level of significance for the study: the criterion for rejecting the null hypothesis (no difference). Five times out of 100 we will obtain a spurious finding due to nothing other than random sampling error. Small samples are more likely to differ from one another, that is, to have greater variability (a large standard error of the mean). Larger samples are less variable and tend to have a smaller standard error of the mean because they include a larger proportion of the population.

Having established the level of significance for a study, typically p < .05, a researcher is telling us the criterion for rejecting the null hypothesis. In statistical parlance, we might ask: What is the probability of obtaining a difference between the means as large as or larger than that obtained by Doakes toothpaste due to random error? That is, what is the probability of rejecting the null hypothesis when the null is true? This is what the level of significance, or p-value, tells us: the probability of committing a false positive, or Type 1, error. If Doakes repeats the study 100 times, about 5 times in 100 they will reject the null when the null is true and commit a false positive error.
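To make the decision rule concrete, the probability of a sample mean of 10 can be sketched with the standard normal curve. As before, the mean of 5.8 is from the text, while the standard deviation of 1.0 is my inference from the 95% CI of 3.8-7.8 (the mean plus or minus 2 standard deviations):

```python
from math import erf, sqrt

POP_MEAN = 5.8
POP_SD = 1.0  # inferred from the 95% CI of 3.8-7.8 (mean +/- 2 sd)

def two_sided_p(x):
    """Probability of a value at least as far from the mean as x, in either direction."""
    z = abs(x - POP_MEAN) / POP_SD
    return 1 - erf(z / sqrt(2))

z = (10 - POP_MEAN) / POP_SD
print(f"z = {z:.1f}, p = {two_sided_p(10):.2e}")  # z = 4.2; p is far below .05
```

A value of 10 lies 4.2 standard deviations above the mean, so it is not merely significant at the p < .05 level; it is far beyond the 2-standard-deviation cutoff of the 95% CI.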
The next time you read a research paper that reports a significant difference, ask yourself: Is this a real difference, or one caused by random error? On the other hand, when you read a paper that reports no statistically significant difference, ask yourself: Did the researcher accept the null when the null is false and commit a Type 2 error? A Type 2 error is a false negative. In the polio study, the researchers lacked a sample sufficient to show an effect if one existed. Huff suggests that 15 to 20 times the sample size would be needed to show an effect if one exists; thus, it is likely they committed a Type 2 error, accepting the null when the null is false.

It is a good idea to maintain a healthy level of skepticism when reading research papers, and it is for this reason that you should require a preponderance of evidence in elucidating the truth. For example, consider the Wakefield study, published in the British medical journal The Lancet in 1998, which hypothesized that the MMR vaccine causes autism. The sample in this study consisted of only 12 children. While still controversial, a preponderance of evidence now suggests that MMR is not causally related to autism.

Regression to the mean: how tall will you grow? Huff is referring to the well-known study by Sir Francis Galton (1886), who measured the heights of adults and their children. Galton discovered that when the average height of the parents was greater than the population mean, the children tended to be shorter than their parents, and when the average height of the parents was below the population mean, the children tended to be taller than their parents. He is thus credited with discovering the statistical phenomenon we now call regression to the mean. Regression to the mean is a statistical phenomenon that occurs when a non-random sample is selected on the basis of extreme scores.
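Galton's finding can be reproduced in a toy simulation. Everything in the sketch below is hypothetical: heights are drawn from an assumed normal population (mean 175 cm, standard deviation 7 cm) with an assumed parent-child correlation of 0.5, numbers chosen only to illustrate the mechanism:

```python
import random
from statistics import mean

random.seed(7)  # illustrative seed

POP_MEAN, POP_SD = 175.0, 7.0  # hypothetical adult heights, in cm
CORR = 0.5                     # assumed parent-child height correlation

def child_height(parent):
    """Child height is partly inherited, partly noise; the noise pulls extremes back."""
    noise = random.gauss(0, POP_SD * (1 - CORR**2) ** 0.5)
    return POP_MEAN + CORR * (parent - POP_MEAN) + noise

# Non-random selection on extreme scores: keep only very tall parents
parents = [random.gauss(POP_MEAN, POP_SD) for _ in range(10_000)]
tall = [p for p in parents if p > POP_MEAN + POP_SD]
kids = [child_height(p) for p in tall]

print(f"tall parents average:   {mean(tall):.1f} cm")
print(f"their children average: {mean(kids):.1f} cm")  # closer to 175: regression to the mean
```

Because the extreme group was selected non-randomly, the children's average falls back toward the population mean even though nothing about the children changed; this is exactly the regression artifact Galton observed.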
In the article "Hurricanes and Global Warming," the authors state that "there is a push on climatologist to say something about extremes, because they are so important" (Pielke et al. 2005). It may not surprise you, then, that the year following Katrina (2005 was supposed to be an example of the kind of hurricane season we could expect due to global warming), the season ended without a single hurricane hitting the US (National Hurricane Center). Is this regression to the mean? History and the literature are replete with examples of regression to the mean. Be on the lookout for this phenomenon so you do not fall prey to the regression fallacy. Whether in research articles or daily reporting, whenever you are working with data, you must watch for the regression artifact. See the additional readings located in Course Information.


A Case for the Death Penalty (1) Troy Davis, 42, died at 11:08 p.m., according to the Georgia Department of Corrections. (2) His death by lethal injection came 19 years after he was convicted by a jury of his peers for the brutal murder of off-duty police officer Mark MacPhail. (3) Moments before his execution, Davis reportedly told the family of Mr. MacPhail, "I'm not the one who personally killed your son, your father, your brother. I am innocent." (4) Mr. Davis' newfound reverence for life stems, in no small part, from the fact that he was about to lose his own. (5) Life is precious, and the death penalty just reaffirms that fact. (6) I support the death penalty for cop killers and heinous crimes of murder. (7) The death penalty is a deterrent. (8) Without a doubt, Mr. Davis will never kill again. (9) We don't have to like the death penalty in order to support it. (10) We must fight fire with fire. (11) If someone comes down with cancer, it may be necessary to take radical steps to cure the cancer: radical surgery, radiation, and chemotherapy. (12) The disease in this case is injustice. (13) Should this cop killer be given clemency? (14) We may not like the death penalty, but it must be available for such heinous crimes; otherwise, we are giving criminals like Mr. Davis a license to kill. (15) The evidence is clear. (16) When executions went down, the number of murders went up. (17) Looking at the data from 1950-2002, the murder rate went from 4.6 per 100,000 population in 1951 to 10.2 per 100,000 population in 1980, as executions went to zero during the period the Supreme Court declared capital punishment unconstitutional. (18) Executions resumed in 1977. (19) As you can see, the murder rate once again declined (see chart below). (20) Opponents of the death penalty often make the argument that we might kill an innocent person. (21) Mark MacPhail was an innocent person who was executed by Mr. Davis.
(22) He received no appeals to the Supreme Court, no appeals for clemency. (23) Mr. Davis killed in cold blood. (24) It is a fallacy to argue that the death penalty should be abolished because an innocent person might die. (25) Innocent persons are dying all the time; however, only the murderers have the chance to appeal their sentence. (26) In 2010, fifty-six police officers were killed in the line of duty in the US. (27) No doubt by someone who had murdered before. (28) Even life in prison does not guarantee that they will not kill again. (29) All too often, these individuals kill again in prison. (30) Life without parole does not always mean life without parole. (31) California is about to release teen murderers, including cop killers, who were sentenced to life without parole for their crimes. (32) When we lower the penalty for murder, it diminishes regard for the value of the victim's life. (33) Support for the death penalty comes from a surprising group of people: Kant, Locke, Hobbes, Rousseau, Montesquieu, and Mill agreed that natural law properly authorizes the State to take life in order to administer justice. (34) Washington, Jefferson, and Franklin endorsed it. Abraham Lincoln authorized executions for deserters in wartime. (35) Alexis de Tocqueville, the author of Democracy in America, believed that the death penalty was essential to the support of social order. (36) The United States Constitution condemns cruel and inhuman punishment, but it does not condemn capital punishment. (37) Rick Perry stated that "Texas has a very thoughtful, lengthy, and clear process, which ensures everyone a fair hearing, so there is no need to lose sleep over the possibility of executing an innocent person." (38) The appeals process is indeed lengthy. (39) Mr. Davis had 19 years of appeals, and the Supreme Court reaffirmed his guilt. (40) Finally, justice has been served.


