AIOU Course Code 8417-1 Solved Assignment Autumn 2021

 

 

ALLAMA IQBAL OPEN UNIVERSITY ISLAMABAD

(Department of Business Administration)

 

Course: Statistical Inference (8417)                               Semester: Autumn, 2021

Level: BBA (4 Years)

 

 

ASSIGNMENT No. 1

 

Q. 1 Discuss the following: (20)

(i) One Sample Testing Statistics

 

 

The one sample t test compares the mean of your sample data to a known value. For example, you might want to know how your sample mean compares to the population mean. You should run a one sample t test when you don’t know the population standard deviation or you have a small sample size. For a full rundown on which test to use, see: T-score vs. Z-Score.

 

Assumptions of the test (your data should meet these requirements for the test to be valid):

 

Data is independent.

Data is collected randomly. For example, with simple random sampling.

The data is approximately normally distributed.

 

 

Example question: your company wants to improve sales. Past sales data indicate that the average sale was $100 per transaction. After training your sales force, recent sales data (taken from a sample of 25 salesmen) indicates an average sale of $130, with a standard deviation of $15. Did the training work? Test your hypothesis at a 5% alpha level.

 

Step 1: Write your null hypothesis statement (How to state a null hypothesis). The accepted hypothesis is that there is no difference in sales, so:

H0: μ = $100.

 

Step 2: Write your alternate hypothesis. This is the one you’re testing in the one sample t test. You think that there is a difference (that the mean sales increased), so:

H1: μ > $100.

 

Step 3: Identify the following pieces of information you’ll need to calculate the test statistic. The question should give you these items:

 

The sample mean(x̄). This is given in the question as $130.

The population mean(μ). Given as $100 (from past data).

The sample standard deviation(s) = $15.

Number of observations(n) = 25.

Step 4: Insert the items from above into the t score formula.

t = (x̄ − μ) / (s / √n)

 

 

t = (130 − 100) / (15 / √25)

t = 30 / 3 = 10

This is your calculated t-value.

 

Step 5: Find the t-table value. You need two values to find this:

 

The alpha level: given as 5% in the question.

The degrees of freedom, which is the number of items in the sample (n) minus 1: 25 – 1 = 24.

Look up 24 degrees of freedom in the left column and 0.05 in the top row. The intersection is 1.711. This is your one-tailed critical t-value.

 

What this critical value means in a one-tailed t test is that, if the null hypothesis were true, we would expect most calculated t-values to fall below 1.711. If our calculated t-value (from Step 4) falls within this range, we cannot reject the null hypothesis.

 

Step 6: Compare the result of Step 4 with the critical value from Step 5. The calculated value of 10 does not fall within the range found in Step 5; it falls into the rejection region (the right tail), so we can reject the null hypothesis.

 

In other words, it’s highly likely that the mean sale is greater. The one sample t test has told us that sales training was probably a success.
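
The same Steps 1 to 6 can be reproduced in a few lines of code. This is a minimal sketch assuming Python with SciPy; the figures ($100, $130, s = $15, n = 25, α = 0.05) come from the example question above.

from math import sqrt
from scipy import stats

pop_mean = 100       # H0: mu = 100 (historical average sale)
sample_mean = 130    # observed average sale after training
s = 15               # sample standard deviation
n = 25               # number of salespeople sampled
alpha = 0.05

t_calc = (sample_mean - pop_mean) / (s / sqrt(n))   # t = 30 / 3 = 10
t_crit = stats.t.ppf(1 - alpha, df=n - 1)           # one-tailed critical value, about 1.711

print(f"t = {t_calc:.2f}, critical t = {t_crit:.3f}")
if t_calc > t_crit:
    print("Reject H0: the mean sale appears greater than $100.")
else:
    print("Fail to reject H0.")

Running it reproduces t = 10 against a critical value of about 1.711, so the null hypothesis is rejected, as in the worked example.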

 

 

 

(ii) Non-Probability Sampling

 

 

Non-probability sampling (sometimes nonprobability sampling) is a branch of sample selection that uses non-random ways to select a group of people to participate in research.

 

Unlike probability sampling and its methods, non-probability sampling doesn’t focus on accurately representing all members of a large population within a smaller sample group of participants. As a result, not all members of the population have an equal chance of participating in the study.

 

In fact, some research would deliver better results if non-probability sampling was used. For example, if you’re trying to access hard-to-reach social groups that aren’t usually visible, then a representative sample wouldn’t yield suitable candidates.

 

Instead, you may opt to select a sample based on your own reasons, including subjective judgment, sheer convenience, volunteers, or – in the above example – referrals from hidden members of society willing to speak out.

 

When to use non-probability sampling?

Non-probability sampling is typically used when access to a full population is limited or not needed, as well as in the following instances:

 

You may want to gain the views of only a niche or targeted set of people, based on their location or characteristics. To ensure that there is plenty of data about the views of these specific people, it would make sense to have a sample full of people meeting the criteria.

If there is a target market that you want to enter, it may be worthwhile doing a small pilot or exploratory research to see if new products and services are feasible to launch.

If money and time are limited, non-probability sampling allows you to find sample candidates without investing a lot of resources.

Where members are not represented traditionally in large populations or fly under the radar, like far-left and right-wing groups, it’s necessary to approach these subjects differently.

 

 

(iii) Estimation

 

 

Estimation, in statistics, is any of numerous procedures used to calculate the value of some property of a population from observations of a sample drawn from the population. A point estimate, for example, is the single number most likely to express the value of the property. An interval estimate defines a range within which the value of the property can be expected (with a specified degree of confidence) to fall. The 18th-century English theologian and mathematician Thomas Bayes was instrumental in the development of Bayesian estimation to facilitate revision of estimates on the basis of further information. (See Bayes’s theorem.) In sequential estimation the experimenter evaluates the precision of the estimate during the sampling process, which is terminated as soon as the desired degree of precision has been achieved.

 

 

 

(iv) Non-Parametric Tests

 

 

Non-parametric tests, as their name tells us, are statistical tests without parameters. For these tests you need not characterize your population’s distribution in terms of specific parameters. They are also referred to as distribution-free tests because they rest on fewer assumptions (e.g., they do not require a normal distribution). These tests are particularly useful for testing hypotheses when the data are non-normal and resist transformation of any kind.

Because fewer assumptions are needed, these tests are relatively easy to perform and are more robust. An added advantage is the reduced effect of outliers and variance heterogeneity on the results. They can be used for ordinal and sometimes even for nominal data.

However, non-parametric tests have their own disadvantages. First, their results may be less powerful than those of the corresponding parametric tests; to overcome this it is preferable to take a larger sample if this approach is adopted. Second, their results are usually more difficult to interpret, because non-parametric tests typically assign ranks to the observations rather than use the original data, which distorts our intuitive understanding of the data. Non-parametric tests are useful and important in many cases, but they may not always provide ideal results.
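
As a small illustration, the sketch below runs one common non-parametric test, the Mann–Whitney U test, assuming SciPy is available. The two groups are hypothetical scores invented for the example; the test compares them using ranks, so no normality assumption is required.

from scipy.stats import mannwhitneyu

group_a = [12, 15, 14, 10, 39, 13, 11]   # hypothetical scores, containing an outlier (39)
group_b = [18, 21, 17, 22, 19, 20, 23]   # hypothetical scores for the second group

stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")   # a small p-value would suggest the groups differ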

 

 

 

 

Q. 2 What is sampling? Explain different types of non-probability sampling techniques in detail with examples. (20)

 

When you conduct research about a group of people, it’s rarely possible to collect data from every person in that group. Instead, you select a sample. The sample is the group of individuals who will actually participate in the research.

 

To draw valid conclusions from your results, you have to carefully decide how you will select a sample that is representative of the group as a whole. There are two types of sampling methods:

 

Probability sampling involves random selection, allowing you to make strong statistical inferences about the whole group.

Non-probability sampling involves non-random selection based on convenience or other criteria, allowing you to easily collect data.

You should clearly explain how you selected your sample in the methodology section of your paper or thesis.

 

First, you need to understand the difference between a population and a sample, and identify the target population of your research.

 

The population is the entire group that you want to draw conclusions about.

The sample is the specific group of individuals that you will collect data from.

The population can be defined in terms of geographical location, age, income, and many other characteristics.

 

It can be very broad or quite narrow: maybe you want to make inferences about the whole adult population of your country; maybe your research focuses on customers of a certain company, patients with a specific health condition, or students in a single school.

 

It is important to carefully define your target population according to the purpose and practicalities of your project.

 

If the population is very large, demographically mixed, and geographically dispersed, it might be difficult to gain access to a representative sample.

 

Sampling frame

The sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).

 

Example

You are doing research on working conditions at Company X. Your population is all 1000 employees of the company. Your sampling frame is the company’s HR database which lists the names and contact details of every employee.

 

Sample size

The number of individuals you should include in your sample depends on various factors, including the size and variability of the population and your research design. There are different sample size calculators and formulas depending on what you want to achieve with statistical analysis.

 

Probability sampling methods

Probability sampling means that every member of the population has a chance of being selected. It is mainly used in quantitative research. If you want to produce results that are representative of the whole population, probability sampling techniques are the most valid choice.

 

There are four main types of probability sample.

 


  1. Simple random sampling

In a simple random sample, every member of the population has an equal chance of being selected. Your sampling frame should include the whole population.

 

To conduct this type of sampling, you can use tools like random number generators or other techniques that are based entirely on chance.

 

Example

You want to select a simple random sample of 100 employees of Company X. You assign a number to every employee in the company database from 1 to 1000, and use a random number generator to select 100 numbers.
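
A minimal sketch of this example in Python, assuming the 1,000 employees are simply numbered 1 to 1000; random.sample draws 100 of them without replacement, each with an equal chance of selection.

import random

employees = list(range(1, 1001))          # IDs 1..1000 from the company database
sample = random.sample(employees, 100)    # simple random sample of 100 employees
print(sorted(sample)[:10])                # first few selected IDs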

 

2. Systematic sampling

Systematic sampling is similar to simple random sampling, but it is usually slightly easier to conduct. Every member of the population is listed with a number, but instead of randomly generating numbers, individuals are chosen at regular intervals.

 

Example

All employees of the company are listed in alphabetical order. From the first 10 numbers, you randomly select a starting point: number 6. From number 6 onwards, every 10th person on the list is selected (6, 16, 26, 36, and so on), and you end up with a sample of 100 people.

 

If you use this technique, it is important to make sure that there is no hidden pattern in the list that might skew the sample. For example, if the HR database groups employees by team, and team members are listed in order of seniority, there is a risk that your interval might skip over people in junior roles, resulting in a sample that is skewed towards senior employees.
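
A sketch of the systematic sample described above, assuming an alphabetical list of 1,000 employee IDs: a random starting point is chosen within the first interval and then every 10th person is taken.

import random

employees = list(range(1, 1001))
k = 10                                    # sampling interval (1000 / 100)
start = random.randint(0, k - 1)          # random starting point within the first interval
sample = employees[start::k]              # e.g. IDs 6, 16, 26, ... when start = 5
print(len(sample), sample[:5])            # 100 people in total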

 

3. Stratified sampling

Stratified sampling involves dividing the population into subpopulations that may differ in important ways. It allows you to draw more precise conclusions by ensuring that every subgroup is properly represented in the sample.

 

To use this sampling method, you divide the population into subgroups (called strata) based on the relevant characteristic (e.g. gender, age range, income bracket, job role).

 

Based on the overall proportions of the population, you calculate how many people should be sampled from each subgroup. Then you use random or systematic sampling to select a sample from each subgroup.

 

Example

The company has 800 female employees and 200 male employees. You want to ensure that the sample reflects the gender balance of the company, so you sort the population into two strata based on gender. Then you use random sampling on each group, selecting 80 women and 20 men, which gives you a representative sample of 100 people.
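
A sketch of this stratified sample with proportional allocation (80 women and 20 men) and simple random sampling within each stratum; the ID ranges below are assumed purely for illustration.

import random

women = list(range(1, 801))               # employee IDs 1-800 (female stratum, assumed)
men = list(range(801, 1001))              # employee IDs 801-1000 (male stratum, assumed)

sample = random.sample(women, 80) + random.sample(men, 20)
print(len(sample))                        # 100, reflecting the 80/20 gender balance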

 

4. Cluster sampling

Cluster sampling also involves dividing the population into subgroups, but each subgroup should have similar characteristics to the whole sample. Instead of sampling individuals from each subgroup, you randomly select entire subgroups.

 

If it is practically possible, you might include every individual from each sampled cluster. If the clusters themselves are large, you can also sample individuals from within each cluster using one of the techniques above. This is called multistage sampling.

 

This method is good for dealing with large and dispersed populations, but there is more risk of error in the sample, as there could be substantial differences between clusters. It’s difficult to guarantee that the sampled clusters are really representative of the whole population.

 

Example

The company has offices in 10 cities across the country (all with roughly the same number of employees in similar roles). You don’t have the capacity to travel to every office to collect your data, so you use random sampling to select 3 offices – these are your clusters.
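
A sketch of the cluster selection step, assuming the 10 offices are simply labelled; three offices are chosen at random, and every employee in the selected offices (or a sub-sample of them, for multistage sampling) would then be surveyed.

import random

offices = [f"Office {i}" for i in range(1, 11)]   # the 10 city offices
clusters = random.sample(offices, 3)              # randomly selected clusters
print(clusters)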

 


 

Non-probability sampling methods

In a non-probability sample, individuals are selected based on non-random criteria, and not every individual has a chance of being included.

 

This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias. That means the inferences you can make about the population are weaker than with probability samples, and your conclusions may be more limited. If you use a non-probability sample, you should still aim to make it as representative of the population as possible.

 

Non-probability sampling techniques are often used in exploratory and qualitative research. In these types of research, the aim is not to test a hypothesis about a broad population, but to develop an initial understanding of a small or under-researched population.

 


  1. Convenience sampling

A convenience sample simply includes the individuals who happen to be most accessible to the researcher.

 

This is an easy and inexpensive way to gather initial data, but there is no way to tell if the sample is representative of the population, so it can’t produce generalizable results.

 

Example

You are researching opinions about student support services in your university, so after each of your classes, you ask your fellow students to complete a survey on the topic. This is a convenient way to gather data, but as you only surveyed students taking the same classes as you at the same level, the sample is not representative of all the students at your university.

 

2. Voluntary response sampling

Similar to a convenience sample, a voluntary response sample is mainly based on ease of access. Instead of the researcher choosing participants and directly contacting them, people volunteer themselves (e.g. by responding to a public online survey).

 

Voluntary response samples are always at least somewhat biased, as some people will inherently be more likely to volunteer than others.

 

Example

You send out the survey to all students at your university and a lot of students decide to complete it. This can certainly give you some insight into the topic, but the people who responded are more likely to be those who have strong opinions about the student support services, so you can’t be sure that their opinions are representative of all students.

 

3. Purposive sampling

This type of sampling, also known as judgement sampling, involves the researcher using their expertise to select a sample that is most useful to the purposes of the research.

 

It is often used in qualitative research, where the researcher wants to gain detailed knowledge about a specific phenomenon rather than make statistical inferences, or where the population is very small and specific. An effective purposive sample must have clear criteria and rationale for inclusion.

 

Example

You want to know more about the opinions and experiences of disabled students at your university, so you purposefully select a number of students with different support needs in order to gather a varied range of data on their experiences with student services.

 

4. Snowball sampling

If the population is hard to access, snowball sampling can be used to recruit participants via other participants. The number of people you have access to “snowballs” as you get in contact with more people.

 

Example

You are researching experiences of homelessness in your city. Since there is no list of all homeless people in the city, probability sampling isn’t possible. You meet one person who agrees to participate in the research, and she puts you in contact with other homeless people that she knows in the area.

 

 

Q. 3 The University of North Carolina is conducting a study on the average weight of the many bricks that make up the University’s walkways. Workers are sent to dig up and weigh a sample of 421 bricks, and the average brick weight of this sample was 14.2 lb. It is a well-known fact that the standard deviation of brick weight is 0.8 lb. (20)

 

 

Point Estimates: The sample mean x̄ is the best estimator of the population mean μ. It is unbiased, consistent, and the most efficient estimator, and, as long as the sample is sufficiently large, its sampling distribution can be approximated by the normal distribution. If we know the sampling distribution of x̄, we can make statements about any estimate we may make from sampling information. Let’s look at a medical-supplies company that produces disposable hypodermic syringes. Each syringe is wrapped in a sterile package and then jumble-packed in a large corrugated carton. Jumble packing causes the cartons to contain differing numbers of syringes. Because the syringes are sold on a per-unit basis, the company needs an estimate of the number of syringes per carton for billing purposes. We have taken a sample of 35 cartons at random and recorded the number of syringes in each carton. From these data, x̄ = Σx / n = 3570 / 35 = 102 syringes. Thus, using the sample mean x̄ as our estimator, the point estimate of the population mean is 102 syringes per carton. The manufactured price of a disposable hypodermic syringe is quite small (about 25¢), so both the buyer and the seller would accept the use of this point estimate as the basis for billing, and the manufacturer can save the time and expense of counting each syringe that goes into a carton.

 

Point Estimate of the Population Variance and Standard Deviation: Suppose the management of the medical-supplies company wants to estimate the variance and/or standard deviation of the distribution of the number of packaged syringes per carton. The most frequently used estimator of the population standard deviation σ is the sample standard deviation s. For the sample of 35 cartons, s² = 36.12 and s = 6.01, so the sample variance and sample standard deviation are 36.12 and 6.01 syringes, respectively. If, instead of taking s² = Σ(x − x̄)² / (n − 1) as our sample variance, we had taken s² = Σ(x − x̄)² / n, the result would have some bias as an estimator of the population variance; specifically, it would tend to be too low. Using a divisor of n − 1 gives us an unbiased estimator of σ². Thus, we will use s² and s to estimate σ² and σ.

Point Estimate of the Population Proportion: The proportion of units that have a particular characteristic in a given population is symbolized p. If we know the proportion of units in a sample that have that same characteristic (symbolized p̄), we can use this p̄ as an estimator of p. It can be shown that p̄ has all the desirable properties discussed earlier. Continuing our example of the manufacturer of medical supplies, we shall try to estimate the population proportion from the sample proportion. Suppose management wishes to estimate the number of cartons that will arrive damaged, owing to poor handling in shipment after the cartons leave the factory. We can check a sample of 50 cartons from their shipping point to their arrival at their destination and then record the presence or absence of damage. If, in this case, we find that the proportion of damaged cartons in the sample is 0.08, we would say that p̄ = 0.08 = sample proportion damaged. Because the sample proportion p̄ is a convenient estimator of the population proportion p, we can estimate that the proportion of damaged cartons in the population will also be 0.08.

Problems to be solved:

1) A stadium authority is considering expanding its seating capacity and needs to know both the average number of people who attend events there and the variability in this number. The following are the attendances (in thousands) at nine randomly selected sporting events. Find point estimates of the mean and the variance of the population from which the sample was drawn (a short calculation sketch for this problem follows the list of problems below).

8.8, 14.0, 21.3, 7.9, 12.5, 20.6, 16.3, 14.1, 13.0

 

2) The Pizza Distribution Authority (PDA) has developed quite a business in some areas by delivering pizza orders promptly. PDA guarantees that its pizzas will be delivered in 30 minutes or less from the time the order was placed, and if the delivery is late, the pizza is free. The time that it takes to deliver each pizza order that is on time is recorded in the Official Pizza Time Block (OPTB), and the delivery time for those pizzas that are delivered late is recorded as 30 minutes in the OPTB. Twelve random entries from the OPTB are listed below.

15.3, 29.5, 30.0, 10.1, 30.0, 19.6, 10.8, 12.2, 14.8, 30.0, 22.1, 18.3

(a) Find the mean for the sample.
(b) From what population was this sample drawn?
(c) Can this sample be used to estimate the average time that it takes for PDA to deliver a pizza? Explain.

3) Dr Ahmed, a meteorologist for a local television station, would like to report the average rainfall for today on this evening’s newscast. The following are the rainfall measurements (in inches) for today’s date for 16 randomly chosen past years. Determine the sample mean rainfall.

0.47, 0.27, 0.13, 0.54, 0.00, 0.08, 0.75, 0.06, 0.00, 1.05, 0.34, 0.26, 0.17, 0.42, 0.50, 0.86

4) The Standard Chartered Bank is trying to determine the number of tellers available during the lunch rush on Sundays. The bank has collected data on the number of people who entered the bank during the last 3 months on Sundays from 11 A.M. to 1 P.M. Using the data below, find point estimates of the mean and standard deviation of the population from which the sample was drawn.

 

5) Electric Pizza was considering national distribution of its regionally successful product and was compiling pro forma sales data. The average monthly sales figures (in thousands of dollars) from its 30 current distributors are listed. Treating them as (a) a sample and (b) a population, compute the standard deviation.

7.3, 5.8, 4.5, 8.5, 5.2, 4.1, 2.8, 3.8, 6.5, 3.4, …
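
A minimal calculation sketch for problem 1 above (the stadium attendance data), assuming Python’s standard statistics module. It applies the point estimators discussed in this section: the sample mean for μ and the n − 1 (unbiased) sample variance for σ².

import statistics

attendance = [8.8, 14.0, 21.3, 7.9, 12.5, 20.6, 16.3, 14.1, 13.0]   # in thousands

mean = statistics.mean(attendance)        # point estimate of mu   (about 14.28 thousand)
var = statistics.variance(attendance)     # n-1 sample variance    (about 21.1)
sd = statistics.stdev(attendance)         # sample std deviation   (about 4.6)
print(mean, var, sd)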

 

 

a) Find the standard error of the mean

 

 

The standard error of the mean measures how much a sample mean is expected to differ from sample to sample. It describes the accuracy with which sample data represent a population, using the standard deviation. In statistics, the standard deviation is a measure of how spread out numbers are, and the “mean” refers to the average of the numbers. The standard error is used to judge the reliability of a sample mean by describing how sample means deviate from one another. You can use the standard error of the mean to describe how precise the mean of the sample is as an estimate of the true mean of the population. As the size of the sample increases, the mean of the population is known with greater precision. Likewise, the smaller the standard error, the more representative the sample mean will be of the overall population.

 

For example, if you measure the weight of a large sample of men, their weights could range from 125 to 300 pounds. However, if you look at the mean of the sample data, the samples will only vary by a few pounds. You can then use the standard error of the mean to determine how much the weight varies from the mean.

 

Standard error of the mean formula

The formula for the standard error of the mean is expressed as:

 

SE = σ/√n

 

SE refers to the standard error of the sample mean

σ refers to the population standard deviation (the sample standard deviation s is used when σ is unknown)

n is the size of the sample.
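
Applying the formula above to part (a) of the brick question, where σ = 0.8 lb is treated as known and n = 421 bricks were weighed, gives the standard error directly; a minimal sketch in Python:

from math import sqrt

sigma = 0.8          # known population standard deviation (lb)
n = 421              # sample size (bricks weighed)

se = sigma / sqrt(n)
print(f"Standard error of the mean = {se:.4f} lb")   # roughly 0.039 lb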

 

 

b) What is the interval around the sample mean that will include the population mean 95.5 percent of the time?

 

As noted in earlier modules, a key goal in applied biostatistics is to make inferences about unknown population parameters based on sample statistics. There are two broad areas of statistical inference, estimation and hypothesis testing. Estimation is the process of determining a likely value for a population parameter (e.g., the true population mean or population proportion) based on a random sample. In practice, we select a sample from the target population and use sample statistics (e.g., the sample mean or sample proportion) as estimates of the unknown parameter. The sample should be representative of the population, with participants selected at random from the population. In generating estimates, it is also important to quantify the precision of estimates from different samples.

 

 

 

Learning Objectives

After completing this module, the student will be able to:

 

Define point estimate, standard error, confidence level and margin of error

Compare and contrast standard error and margin of error

Compute and interpret confidence intervals for means and proportions

Differentiate independent and matched or paired samples

Compute confidence intervals for the difference in means and proportions in independent samples and for the mean difference in paired samples

Identify the appropriate confidence interval formula based on type of outcome variable and number of samples

 

 


 

Parameter Estimation

There are a number of population parameters of potential interest when one is estimating health outcomes (or “endpoints”). Many of the outcomes we are interested in estimating are either continuous or dichotomous variables, although there are other types which are discussed in a later module. The parameters to be estimated depend not only on whether the endpoint is continuous or dichotomous, but also on the number of groups being studied. Moreover, when two groups are being compared, it is important to establish whether the groups are independent (e.g., men versus women) or dependent (i.e., matched or paired, such as a before and after comparison).   The table below summarizes parameters that may be important to estimate in health-related studies.

 

 

 

 

 

Parameters Being Estimated

One Sample:
  • Continuous variable: mean
  • Dichotomous variable: proportion or rate, e.g., prevalence, cumulative incidence, incidence rate

Two Independent Samples:
  • Continuous variable: difference in means
  • Dichotomous variable: difference in proportions or rates, e.g., risk difference, rate difference, risk ratio, odds ratio, attributable proportion

Two Dependent, Matched Samples:
  • Continuous variable: mean difference

 

 

Confidence Intervals

There are two types of estimates for each population parameter: the point estimate and confidence interval (CI) estimate. For both continuous variables (e.g., population mean) and dichotomous variables (e.g., population proportion) one first computes the point estimate from a sample. Recall that sample means and sample proportions are unbiased estimates of the corresponding population parameters.

 

For both continuous and dichotomous variables, the confidence interval estimate (CI) is a range of likely values for the population parameter based on:

 

the point estimate, e.g., the sample mean

the investigator’s desired level of confidence (most commonly 95%, but any level between 0-100% can be selected)

and the sampling variability or the standard error of the point estimate.

Strictly speaking a 95% confidence interval means that if we were to take 100 different samples and compute a 95% confidence interval for each sample, then approximately 95 of the 100 confidence intervals will contain the true mean value (μ). In practice, however, we select one random sample and generate one confidence interval, which may or may not contain the true mean. The observed interval may over- or underestimate μ. Consequently, the 95% CI is the likely range of the true, unknown parameter. The confidence interval does not reflect the variability in the unknown parameter. Rather, it reflects the amount of random error in the sample and provides a range of values that are likely to include the unknown parameter. Another way of thinking about a confidence interval is that it is the range of likely values of the parameter (defined as the point estimate ± margin of error) with a specified level of confidence (which is similar to a probability).
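
A small simulation of this interpretation, assuming NumPy is available: draw repeated samples from a known normal population (the parameters below are arbitrary), build a 95% interval around each sample mean, and count how often the true mean is captured.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 50.0, 10.0, 40, 1000   # arbitrary population and sample settings

hits = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)                       # estimated standard error
    lower, upper = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    hits += lower <= mu <= upper                               # did this interval capture mu?

print(f"{hits / reps:.1%} of intervals contained mu")          # close to 95%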

 

Suppose we want to generate a 95% confidence interval estimate for an unknown population mean. This means that there is a 95% probability that the confidence interval will contain the true population mean. Thus, P( [sample mean] – margin of error < μ < [sample mean] + margin of error) = 0.95.

 

The Central Limit Theorem introduced in the module on Probability stated that, for large samples, the distribution of the sample means is approximately normally distributed with a mean:

μ(X̄) = μ

and a standard deviation (also called the standard error):

SE = σ/√n

For the standard normal distribution, P(−1.96 < Z < 1.96) = 0.95, i.e., there is a 95% probability that a standard normal variable, Z, will fall between −1.96 and 1.96. The Central Limit Theorem states that for large samples:

Z = (X̄ − μ) / (σ/√n)

By substituting the expression on the right side of the equation:

P(−1.96 < (X̄ − μ) / (σ/√n) < 1.96) = 0.95

Using algebra, we can rework this inequality such that the mean (μ) is the middle term, as shown below:

P(−1.96 · σ/√n < X̄ − μ < 1.96 · σ/√n) = 0.95

then

P(−X̄ − 1.96 · σ/√n < −μ < −X̄ + 1.96 · σ/√n) = 0.95

and finally

P(X̄ − 1.96 · σ/√n < μ < X̄ + 1.96 · σ/√n) = 0.95

This last expression, then, provides the 95% confidence interval for the population mean, and this can also be expressed as:

X̄ ± 1.96 · σ/√n

 

 

Thus, the margin of error is 1.96 times the standard error (the standard deviation of the point estimate from the sample), and 1.96 reflects the fact that a 95% confidence level was selected. So, the general form of a confidence interval is:

 

point estimate ± Z × SE(point estimate)

 

where Z is the value from the standard normal distribution for the selected confidence level (e.g., for a 95% confidence level, Z=1.96).

 

In practice, we often do not know the value of the population standard deviation (σ). However, if the sample size is large (n > 30), then the sample standard deviation can be used to estimate the population standard deviation.

 

 

 

Table – Z-Scores for Commonly Used Confidence Intervals

Desired Confidence Interval          Z Score
90%                                  1.645
95%                                  1.96
99%                                  2.576

 


 

In the health-related publications a 95% confidence interval is most often used, but this is an arbitrary value, and other confidence levels can be selected. Note that for a given sample, the 99% confidence interval would be wider than the 95% confidence interval, because it allows one to be more confident that the unknown population parameter is contained within the interval.
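
For part (b) of the brick question, the interval that should capture the population mean about 95.5 percent of the time uses the common textbook convention z ≈ 2 (the exact coverage of ±2 standard errors is about 95.45%). A minimal sketch applying the interval formula to the question’s figures:

from math import sqrt

x_bar, sigma, n = 14.2, 0.8, 421         # sample mean (lb), known sigma (lb), sample size
se = sigma / sqrt(n)                     # standard error, about 0.039 lb
z = 2.0                                  # multiplier conventionally used for 95.5%

lower, upper = x_bar - z * se, x_bar + z * se
print(f"({lower:.3f}, {upper:.3f}) lb")  # approximately (14.12, 14.28) lb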

 

 

 

 

 

 

 

Q. 4 Answer the following with logical arguments:

         (20)

  • Why must we be required to deal with uncertainty in our decision, even when using statistical techniques?

 

 

I began to study statistics with the notion that statistics is the study of information (retrieval), and that a part of information is uncertainty, which is taken for granted in our random world. Probably it is the other way around: information is a part of uncertainty. Could this be the difference between Bayesian and frequentist?

 

The statistician’s task is to articulate the scientist’s uncertainties in the language of probability, and then to compute with the numbers found: cited from The Philosophy of Statistics by Dennis V. Lindley (2000). The Statistician, 49(3), pp.293-337. The article is a very good read (no theorems and their proofs. It does not begin with “Assume that …”).

 

The author opens the article by positing that statistics is the study of uncertainty, and the rest is very agreeable, as the quotes given above and below show.

 

Because you do not know how to measure the distance to our moon, it does not follow that you do not believe in the existence of a distance to it. Scientists have spent much effort on the accurate determination of length because they were convinced that the concept of distance made sense in terms of krypton light. Similarly, it seems reasonable to attempt the measurement of uncertainty.

 

significance level – the probability of some aspect of the data, given H is true

probability – your probability of H, given the data

 

Many people, especially in scientific matters, think that their statements are objective, expressed through the probability, and are alarmed by the intrusion of subjectivity. Their alarm can be alleviated by considering reality and how that reality is reflected in the probability calculus.

 

I have often seen the stupid question posed ‘what is an appropriate prior for the variance σ2 of a normal (data) density?’ It is stupid because σ is just a Greek letter.

 

The statistician’s role is to articulate the client’s preferences in the form of a utility function, just as it is to express their uncertainty through probability,

 

where clients can be replaced with astronomers.

 

Upon accepting that statistics is the study of uncertainty, we had better think about what this uncertainty is. Depending on how the uncertainty, or the probability, is described, the uncertainty quantification will change. As the author mentioned, statisticians formulate the transcription of the clients’ uncertainty, a task which, I think, astronomers should take responsibility for. Nevertheless, I have come to the impression that astronomers do not care about the subtleties in uncertainties. Generally, the probability model of this uncertainty is built on an independence assumption and at some point is approximated by a Gaussian distribution. Yet there are changes in this tradition, and I frequently observe on arXiv:astro-ph that astronomers are utilizing Bayesian modeling for observed phenomena and reflecting non-Gaussian uncertainty.

 

I hear that work on visualizing uncertainty is in progress. Before codifying it, I wish those astronomers would be careful about the meaning of the uncertainty and the choice of statistics, i.e., about how the uncertainty is modeled.

 

 

  • Is it possible that a false hypothesis will be accepted? How would you explain this?

 

 

A null hypothesis is either true or false. Unfortunately, we do not know which is the case, and we almost never will. It is important to realize that there is no probability that the null hypothesis is true or that it is false, because there is no element of chance. For example, if you are testing whether a potential mine has a greater gold concentration than that of a break-even mine, the null hypothesis that your potential mine has a gold concentration no greater than a break-even mine is either true or it is false; you just don’t know which. There is no probability associated with these two cases (in a frequentist sense) because the gold is already in the ground, and as a result there is no possibility for chance because everything is already set. All we have is our own uncertainty about the null hypothesis.

 

This lack of knowledge about the null hypothesis is why we need to perform a statistical test: we want to use our data to make an inference about the null hypothesis. Specifically, we need to decide if we are going to act as if the null hypothesis is true or act as if it is false. From our hypothesis test, we therefore choose either to accept or to reject the null hypothesis. If we accept the null hypothesis, we are stating that our data are consistent with the null hypothesis (recognizing that other hypotheses might also be consistent with the data). If we reject the null hypothesis, we are stating that our data are so unexpected that they are inconsistent with the null hypothesis.

 

Our decision will change our behavior. If we reject the null hypothesis, we will act as if the null hypothesis is false, even though we do not know if that is in fact false. If we accept the null hypothesis, we will act as if the null hypothesis is true, even though we have not demonstrated that it is in fact true. This is a critical point: regardless of the results of our statistical test, we will never know if the null hypothesis is true or false. In other words, we do not prove or disprove null hypotheses and we never will; we never show that null hypotheses are true or that they are false.

 

In short, we operate in a world where hypotheses are true or false, but we don’t know which. What we would like to do is perform statistical tests that allow us to make decisions (accept or reject), and we would like these to be correct decisions.

 

Significance and confidence

Keeping the probability of a type I error low is straightforward, because we choose our significance level (α). If we are especially concerned about making a type I error, we can set our significance level to be as small as we wish.

 

If the null hypothesis is true, we have a 1-α probability that we will make the correct decision and accept it. We call that probability (1-α) our confidence level. Confidence and significance sum to one because rejecting and accepting a null hypothesis are the only possible choices when the null hypothesis is true. Therefore, when we decrease significance, we increase our confidence. Although you might think you would always want confidence to be as high as possible, doing so comes at a high cost: we make type II errors more likely.

 

Beta and power

Keeping the probability of a type II error small is more complicated.

 

When the null hypothesis is false, β is the probability that we will make the wrong choice and accept it (a type II error). Beta is nearly always unknown, since knowing it requires knowing whether the null hypothesis is true or not. Specifically, calculating beta requires knowing the true value of the parameter, that is, the true hypothesis underlying our data. If we knew that, we wouldn’t need statistics.

 

If the null hypothesis is false, there is a 1-β probability that we will make the right choice and reject it. The probability that we will make the right choice when the null hypothesis is false is called statistical power. Power reflects our ability to reject false null hypotheses and detect new phenomena in the world. We must try to maximize power. Power is controlled by four factors:

 

Power increases with the size of the effect that we are trying to detect. For example, it is easier to detect a large difference in means than a small one. We cannot control effect size, because it is determined by the problem we are studying. The remaining three factors, however, are entirely under our control.

Sample size (n) has a major effect on power. Increasing sample size increases power. We should always strive to have as large of a sample size as money and time allow, as this is the best way to increase power.

Some statistical tests have greater power than other tests. In general, parametric tests (ones that assume a particular distribution, often a normal distribution) have greater power than nonparametric tests (those that do not assume a particular distribution).

Our significance level affects β. Increasing alpha (significance) will increase our power, but it also increases the risk of rejecting a true null hypothesis.
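
As a rough illustration of how these factors interact, the sketch below computes the power of a one-sided, one-sample z test; the effect size, σ, n and α are made-up numbers purely for illustration, and SciPy is assumed for the normal distribution functions.

from math import sqrt
from scipy.stats import norm

delta = 5.0      # true difference from the null value (effect size, assumed)
sigma = 15.0     # population standard deviation (assumed)
n = 30           # sample size
alpha = 0.05     # significance level

z_alpha = norm.ppf(1 - alpha)                           # one-sided critical value
power = 1 - norm.cdf(z_alpha - delta * sqrt(n) / sigma) # probability of rejecting a false H0
print(f"power = {power:.2f}")

Increasing n, alpha, or the effect size in this sketch raises the computed power, in line with the four factors listed above.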

 

 

 

Q. 5 Block Enterprises, a manufacturer of chips for computers, is in the process of deciding whether to replace its current semiautomatic assembly line with a fully automated assembly line. Block has gathered some preliminary test data about hourly chip production, which is summarized in the following table, and it would like to know whether it should upgrade its assembly line. State (and test at α = 0.02) appropriate hypotheses to help Block decide. (20)

                                                     x̄                      s                       n

         Semiautomatic line                   198                  32                    150

         Automatic line                          206                  29                    200

 

 

Initially, machine tool automation started with the development of numerical control in the 1950s. In less than 50 years, today’s manufacturing plants have become remarkably automated. However, these plants produce relatively few varieties of product. First, what do we mean by a manufacturing plant? Here we consider several categories of production in various manufacturing plants. Manufacturing can be considered in three broad areas:

(i) continuous process production,

(ii) mass production, and

(iii) job-shop production.

Among these three, mass production and job-shop production can be categorized as discrete-item production.

 

Continuous Process Production: Such products flow continuously through the manufacturing system, e.g., petroleum, cement, steel rolling, petrochemical and paper production, etc.

 

Mass Production

Mass production involves the production of discrete units at a very high rate of speed. Discrete-item production is used for goods such as automobiles, refrigerators, televisions, electronic components and so on. The equipment used here is applicable only to a small group of similar products. Mass production contains the character of continuous process production applied to discrete products; that is why mass production has realized enormous benefits from automation and mechanization.

 

Job Shop Production

 

A manufacturing facility that produces a large number of different discrete items and requires different sequences among the production equipment is called a job shop. Scheduling and routing problems are the essential features of a job shop. As a result, automation has at best been restricted to individual components of the job shop, though there have been a few attempts at total automation.

 

Returning to the question: Block Enterprises, a manufacturer of chips for computers, must decide whether to replace its current semiautomatic assembly line with a fully automated one, based on the preliminary test data summarized above (semiautomatic line: x̄ = 198, s = 32, n = 150; automatic line: x̄ = 206, s = 29, n = 200), testing at α = 0.02. The appropriate hypotheses are H0: μ_automatic ≤ μ_semiautomatic versus H1: μ_automatic > μ_semiautomatic.
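
A minimal sketch of this test in Python, assuming SciPy for the normal quantile; with samples this large, a two-sample z test on the difference in means is a common choice for the summary data given.

from math import sqrt
from scipy.stats import norm

x1, s1, n1 = 198.0, 32.0, 150     # semiautomatic line (mean, std dev, sample size)
x2, s2, n2 = 206.0, 29.0, 200     # automatic line
alpha = 0.02

se = sqrt(s1**2 / n1 + s2**2 / n2)   # standard error of the difference in means
z = (x2 - x1) / se                   # roughly 2.41
z_crit = norm.ppf(1 - alpha)         # roughly 2.05 for a one-tailed test at alpha = 0.02

print(f"z = {z:.2f}, critical z = {z_crit:.2f}")
if z > z_crit:
    print("Reject H0: the automated line appears to produce more chips per hour.")
else:
    print("Fail to reject H0: the data do not clearly favour upgrading.")

Since the computed z of about 2.41 exceeds the one-tailed critical value of about 2.05, the data support rejecting H0 at the 0.02 level, which favours upgrading the assembly line.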

 
