Six Sigma Terminologies & Definitions – MySixSigmaTrainer.com

1-Sample sign test: Tests the probability of the sample median being equal to a hypothesized value.
Accelerated Testing: Accelerated testing allows designers to make predictions about the life of a product by developing a model that correlates reliability under accelerated conditions to reliability under normal conditions.
Accuracy: Accuracy refers to the variation between a measurement and what actually exists. It is the difference between an individual’s average measurements and that of a known standard, or accepted “truth.”
Alpha risk: Alpha risk is defined as the risk of accepting the alternate hypothesis when, in fact, the null hypothesis is true; in other words, stating a difference exists where actually there is none. Alpha risk is stated in terms of probability (such as 0.05 or 5%). The acceptable level of alpha risk is determined by an organization or individual and is based on the nature of the decision being made. For decisions with high consequences (such as those involving risk to human life), an alpha risk of less than 1% would be expected. If the decision involves minimal time or money, an alpha risk of 10% may be appropriate. In general, an alpha risk of 5% is considered the norm in decision making. Sometimes alpha risk is expressed as its complement, the confidence level: an alpha risk of 5% corresponds to a 95% confidence level.
Alternative hypothesis (Ha): The alternate hypothesis (Ha) is a statement that the observed difference or relationship between two populations is real and not due to chance or sampling error. The alternate hypothesis is the opposite of the null hypothesis (P < 0.05); it states that a dependency exists between two or more factors.
Analysis of variance (ANOVA): Analysis of variance is a statistical technique for analyzing data that tests for a difference between two or more means. See the tool 1-Way ANOVA.
Anderson-Darling Normality Test: A p-value < 0.05 indicates the data are not normal.
Appraiser Variation (AV): The variation between sample groups; in Gage R&R, the reproducibility component.
Attribute Data: See discrete data.
B-Life: A common way to express values of the cumulative distribution function; B10 refers to the time at which 10% of the parts are expected to have failed.
Bar chart: A bar chart is a graphical comparison of several quantities in which the lengths of the horizontal or vertical bars represent the relative magnitude of the values.
Bathtub Curve: A plot of failure rate over time, with three regions: infant mortality, random failure (useful life), and wearout.
Benchmarking: Benchmarking is an improvement tool whereby a company measures its performance or process against other companies’ best practices, determines how those companies achieved their performance levels, and uses the information to improve its own performance. See the tool Benchmarking.
Beta risk: Beta risk is defined as the risk of accepting the null hypothesis when, in fact, the alternate hypothesis is true; in other words, stating no difference exists when there is an actual difference. A statistical test should be capable of detecting differences that are important to you, and beta risk is the probability (such as 0.10 or 10%) that it will not. Beta risk is determined by an organization or individual and is based on the nature of the decision being made. Beta risk depends on the magnitude of the difference between sample means and is managed by increasing test sample size. In general, a beta risk of 10% is considered acceptable in decision making.
Bias: Bias in a sample is the presence or influence of any factor that causes the population or process being sampled to appear different from what it actually is. Bias is introduced into a sample when data is collected without regard to key factors that may influence the population or process.
Blocking: Blocking neutralizes background variables that cannot be eliminated by randomizing. It does so by spreading them across the experiment.
Bounding (8 steps): 1. Identify the customer; 2. Define customer expectations; 3. Clearly specify your deliverables vs. their expectations; 4. Identify CTQs for deliverables; 5. Map the process; 6. Define where in the process CTQs can be most seriously affected; 7. Evaluate which CTQs have the greatest opportunity for impact; 8. Define the project to improve the selected CTQs.
Boxplot: A box plot, also known as a box and whisker diagram, is a basic graphing tool that displays the centering, spread, and distribution of a continuous data set.
CAP Includes/Excludes: CAP Includes/Excludes is a tool that can help your team define the boundaries of your project, facilitate discussion about issues related to your project scope, and challenge you to agree on what is included and excluded within the scope of your work. See the tool CAP Includes/Excludes.
CAP Stakeholder Analysis: CAP Stakeholder Analysis is a tool to identify and enlist support from stakeholders. It provides a visual means of identifying stakeholder support so that you can develop an action plan for your project. See the tool CAP Stakeholder Analysis.
Capability Analysis: Capability analysis is a Minitab™ tool that visually compares actual process performance to the performance standards. See the tool Capability Analysis.
Cause: A factor (X) that has an impact on a response variable (Y); a source of variation in a process or product.
Cause and Effect Diagram: A cause and effect diagram is a visual tool used to logically organize possible causes for a specific problem or effect by graphically displaying them in increasing detail. It helps to identify root causes and ensures common understanding of the causes that lead to the problem. Because of its fishbone shape, it is sometimes called a “fishbone diagram.” See the tool Cause and Effect Diagram.
CDF (Cumulative Distribution Function): The Cumulative Distribution Function (CDF) represents the probability that the system fails at some time prior to t. It is the integral of the PDF evaluated from 0 to t.
Censoring (INTERVAL): You only know that a failure occurred between two different times.
Censoring (LEFT): You only know that a failure occurred before a particular time. Example: the gear broke sometime before 5,000 hours.
Censoring (RIGHT): You only know that a failure occurred after a particular time. Example: the gear broke sometime after 3,000 hours.
Center: The center of a process is the average value of its data. It is equivalent to the mean and is one measure of central tendency.
Center points: A center point is a run performed with all factors set halfway between their low and high levels. Each factor must be continuous to have a logical halfway point. For example, there are no logical center points for the factors vendor, machine, or location (such as city); however, there are logical center points for the factors temperature, speed, and length.
Central Limit Theorem: The central limit theorem states that, given a distribution with mean μ and variance σ², the sampling distribution of the mean approaches a normal distribution with mean μ and variance σ²/n as the sample size n increases.
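The theorem is easy to see by simulation. Below is a minimal sketch in Python (NumPy assumed; the exponential population and the sample sizes are illustrative choices, not from the original text) showing sample means tightening around the population mean with standard deviation σ/√n:

```python
# Minimal central limit theorem demonstration with NumPy.
import numpy as np

rng = np.random.default_rng(seed=1)
population_mean, n_samples = 1.0, 10_000

for n in (2, 10, 50):
    # Draw 10,000 samples of size n from a skewed (exponential) population
    # and compute the mean of each sample.
    sample_means = rng.exponential(population_mean, size=(n_samples, n)).mean(axis=1)
    # The spread of the sample means shrinks like sigma / sqrt(n), and their
    # distribution looks increasingly normal as n grows.
    print(f"n={n:3d}  mean={sample_means.mean():.3f}  "
          f"std={sample_means.std(ddof=1):.3f}  expected std={1.0/np.sqrt(n):.3f}")
```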
Characteristic: A characteristic is a definable or measurable feature of a process, product, or variable.
Charter (5 steps): 1. Develop the business case; 2. Define the problem and goal; 3. Determine the project scope; 4. Select the team and define roles; 5. Set project milestones.
Chi Square test: A chi square test, also called a “test of association,” is a statistical test of association between discrete variables. It is based on a mathematical comparison of the number of observed counts with the number of expected counts to determine if there is a difference in output counts based on the input category. See the tool Chi Square-Test of Independence. Used with defects data (counts) and defectives data (how many good or bad). The critical chi-square is the chi-square value at which p = 0.05.
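A minimal sketch of the test in Python (SciPy assumed; the 2×2 table of counts is hypothetical):

```python
# Chi-square test of association on a small, hypothetical 2x2 table of counts
# (e.g., defective vs. good units by shift).
from scipy.stats import chi2_contingency

observed = [[20, 180],   # shift A: defective, good
            [45, 155]]   # shift B: defective, good

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square={chi2:.2f}, dof={dof}, p={p:.4f}")
# dof = (#rows - 1) * (#cols - 1) = 1 here; p < 0.05 suggests the
# defect counts depend on the input category (shift).
```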
Coefficient of variation: The ratio of the standard deviation to the mean, expressed as a percentage (not often seen, but useful).
Common cause variability: Common cause variability is a source of variation caused by unknown factors that result in a steady but random distribution of output around the average of the data. Common cause variation is a measure of the process’s potential, or how well the process can perform when special cause variation is removed. Therefore, it is a measure of the process technology. Common cause variation is also called random variation, noise, noncontrollable variation, within-group variation, or inherent variation. Example: many X’s with a small impact.
Confidence band (or interval): A measurement of the certainty of the shape of the fitted regression line. A 95% confidence band implies a 95% chance that the true regression line fits within the confidence bands.
Confounding: Factors or interactions are said to be confounded when the effect of one factor is combined with that of another; in other words, their effects cannot be analyzed independently.
Consumer’s Risk: Concluding something is bad when it is actually good (Type II error).
Continuous Data: Continuous data is information that can be measured on a continuum or scale. Continuous data can have almost any numeric value and can be meaningfully subdivided into finer and finer increments, depending upon the precision of the measurement system. Examples of continuous data include measurements of time, temperature, weight, and size. For example, time can be measured in days, hours, minutes, seconds, and in even smaller units. Continuous data is also called quantitative data.
Control / Specification Limits: Control limits define the area three standard deviations on either side of the centerline, or mean, of data plotted on a control chart. Do not confuse control limits with specification limits. Control limits reflect the expected variation in the data and are based on the distribution of the data points. Minitab™ calculates control limits using collected data. Specification limits are established based on customer or regulatory requirements. Specification limits change only if the customer or regulatory body so requests.
Correlation: Correlation is the degree or extent of the relationship between two variables. If the value of one variable increases when the value of the other increases, they are said to be positively correlated. If the value of one variable decreases when the value of the other increases, they are said to be negatively correlated. The degree of linear association between two variables is quantified by the correlation coefficient.
Correlation coefficient (r): The correlation coefficient quantifies the degree of linear association between two variables. It is typically denoted by r and will have a value ranging between negative 1 and positive 1.
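A minimal sketch in Python (NumPy assumed; the pressure and leak-rate values are hypothetical):

```python
# Correlation coefficient r between two hypothetical variables.
import numpy as np

pressure  = [30, 35, 40, 45, 50]
leak_rate = [1.2, 1.0, 0.9, 0.7, 0.5]

r = np.corrcoef(pressure, leak_rate)[0, 1]
print(f"r={r:.3f}")  # close to -1: strongly negatively correlated
```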
Critical element: A critical element is an X that does not necessarily have different levels of a specific scale but can be configured according to a variety of independent alternatives. For example, a critical element may be the routing path for an incoming call or an item request form in an order-taking process. In these cases the critical element must be specified correctly before you can create a viable solution; however, numerous alternatives may be considered as possible solutions.
CTQ: CTQs (Critical to Quality) are the key measurable characteristics of a product or process whose performance standards, or specification limits, must be met in order to satisfy the customer. They align improvement or design efforts with critical issues that affect customer satisfaction. CTQs are defined early in any Six Sigma project, based on Voice of the Customer (VOC) data.
Cumulative Distribution Function (CDF): See CDF (Cumulative Distribution Function).
Cycle time: Cycle time is the total time from the beginning to the end of your process, as defined by you and your customer. Cycle time includes process time, during which a unit is acted upon to bring it closer to an output, and delay time, during which a unit of work waits to be processed.
Dashboard: A dashboard is a tool used for collecting and reporting information about vital customer requirements and your business’s performance for key customers. Dashboards provide a quick summary of process performance.
Data: Data is factual information used as a basis for reasoning, discussion, or calculation; often this term refers to quantitative information.
Defect: A defect is any nonconformity in a product or process; it is any event that does not meet the performance standards of a Y.
Defective: The word defective describes an entire unit that fails to meet acceptance criteria, regardless of the number of defects within the unit. A unit may be defective because of one or more defects.
Descriptive statistics: Descriptive statistics is a method of statistical analysis of numeric data, discrete or continuous, that provides information about centering, spread, and normality. Results of the analysis can be in tabular or graphic format.
Design Risk Assessment: A design risk assessment is the act of determining potential risk in a design process, either in a concept design or a detailed design. It provides a broader evaluation of your design beyond just CTQs, and will enable you to eliminate possible failures and reduce the impact of potential failures. This ensures a rigorous, systematic examination of the reliability of the design and allows you to capture system-level risk.
Detectable Effect Size: When you are deciding what factors and interactions you want to get information about, you also need to determine the smallest effect you will consider significant enough to improve your process. This minimum size is known as the detectable effect size, or DES. Large effects are easier to detect than small effects. A design of experiment compares the total variability in the experiment to the variation caused by a factor. The smaller the effect you are interested in, the more runs you will need to overcome the variability in your experimentation. RULE: Don’t run an experiment if you cannot detect the detectable effect size.
Detection: How likely will the current system detect the cause or failure mode if it occurs?
DF (degrees of freedom): For a chi-square test of association, equal to (#rows − 1) × (#columns − 1).
Discrete Data: Discrete data is information that can be categorized into a classification. Discrete data is based on counts. Only a finite number of values is possible, and the values cannot be subdivided meaningfully. For example, the number of parts damaged in shipment produces discrete data because parts are either damaged or not damaged.
Distribution: Distribution refers to the behavior of a process described by plotting the number of times a variable displays a specific value or range of values rather than by plotting the value itself.
DMADV: DMADV is GE Company’s data-driven quality strategy for designing products and processes, and it is an integral part of GE’s Six Sigma Quality Initiative. DMADV consists of five interconnected phases: Define, Measure, Analyze, Design, and Verify.
DMAIC: DMAIC refers to General Electric’s data-driven quality strategy for improving processes, and is an integral part of the company’s Six Sigma Quality Initiative. DMAIC is an acronym for five interconnected phases: Define, Measure, Analyze, Improve, and Control.
DOE: A design of experiment is a structured, organized method for determining the relationship between factors (Xs) affecting a process and the output of that process.
DPMO: Defects per million opportunities (DPMO) is the number of defects observed during a standard production run divided by the number of opportunities to make a defect during that run, multiplied by one million.
DPO: Defects per opportunity (DPO) represents total defects divided by total opportunities. DPO is a preliminary calculation to help you calculate DPMO (defects per million opportunities). Multiply DPO by one million to calculate DPMO.
DPU: Defects per unit (DPU) represents the number of defects divided by the number of products.
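A short worked example (hypothetical counts) tying DPU, DPO, and DPMO together:

```python
# Hypothetical production run: 500 units, 10 opportunities per unit, 35 defects.
defects, units, opportunities_per_unit = 35, 500, 10

dpu  = defects / units                              # defects per unit
dpo  = defects / (units * opportunities_per_unit)   # defects per opportunity
dpmo = dpo * 1_000_000                              # defects per million opportunities

print(dpu, dpo, dpmo)  # 0.07, 0.007, 7000.0
```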
Dunnett’s: (1-way ANOVA) Check to obtain a two-sided confidence interval for the difference between each treatment mean and a control mean. Specify a family error rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.
Effect: An effect is that which is produced by a cause; the impact a factor (X) has on a response variable (Y).
Entitlement: As good as a process can get without capital investment.
Equipment Variation (EV): The variation within a sample group; in Gage R&R, the repeatability component.
Error: Error, also called residual error, refers to variation in observations made under identical test conditions, or the amount of variation that cannot be attributed to the variables included in the experiment.
Error (Type I): The error of concluding that someone is guilty when, in fact, they are not (H0 is true, but it was rejected and Ha concluded). ALPHA.
Error (Type II): The error of concluding that someone is not guilty when, in fact, they are (Ha is true, but H0 was concluded). BETA.
Experimental Error: The amount of variation that cannot be attributed to the variables included in the experiment.
Exponential Distribution: If the failure rate is constant over time, then the product follows the exponential distribution. This is often used for electronic components. It can also be used for time-dissociated or “random” event-type failures and acts of God: lightning strikes, jet engine bird ingestions.
Factor: A factor is an independent variable; an X.
Failure Mode and Effect Analysis: Failure mode and effects analysis (FMEA) is a disciplined approach used to identify possible failures of a product or service and then determine the frequency and impact of the failure. See the tool Failure Mode and Effects Analysis.
Failure Rate: The ratio of the number of failures within a sample to the cumulative operating time (total operating time of the entire population).
Fisher’s: (1-way ANOVA) Check to obtain confidence intervals for all pairwise differences between level means using Fisher’s LSD procedure. Specify an individual error rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.
Fits: Predicted values of Y calculated using the regression equation for each value of X.
Fitted value: A fitted value is the Y output value that is predicted by a regression equation.
Fractional factorial DOE: A fractional factorial design of experiment (DOE) includes selected combinations of factors and levels. It is a carefully prescribed and representative subset of a full factorial design. A fractional factorial DOE is useful when the number of potential factors is relatively large, because it reduces the total number of runs required. By reducing the number of runs, a fractional factorial DOE will not be able to evaluate the impact of some of the factors independently. In general, higher-order interactions are confounded with main effects or lower-order interactions. Because higher-order interactions are rare, usually you can assume that their effect is minimal and that the observed effect is caused by the main effect or lower-order interaction.
Frequency plot: A frequency plot is a graphical display of how often data values occur.
Full factorial DOE: A full factorial design of experiment (DOE) measures the response of every possible combination of factors and factor levels. These responses are analyzed to provide information about every main effect and every interaction effect. A full factorial DOE is practical when fewer than five factors are being investigated. Testing all combinations of factor levels becomes too expensive and time-consuming with five or more factors.
F-value (ANOVA): A measurement of the distance between individual distributions. As F goes up, p goes down (i.e., more confidence that there is a difference between two means). Calculated as (mean square of X) / (mean square of error).
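A minimal sketch in Python (SciPy assumed; the three sample groups are hypothetical):

```python
# One-way ANOVA on three hypothetical sample groups.
# F = (mean square between groups) / (mean square error); as F rises, p falls.
from scipy.stats import f_oneway

group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [5.6, 5.4, 5.8, 5.5, 5.7]
group_c = [5.0, 5.2, 4.8, 5.1, 4.9]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F={f_stat:.2f}, p={p_value:.4f}")  # small p => at least one mean differs
```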
Gage R&R: Gage R&R, which stands for gage repeatability and reproducibility, is a statistical tool that measures the amount of variation in the measurement system arising from the measurement device and the people taking the measurement. See Gage R&R tools.
Gantt Chart: A Gantt chart is a visual project planning device used for production scheduling. A Gantt chart graphically displays the time needed to complete tasks.
Goodman-Kruskal Gamma: A measure of association for discrete (ordinal) data; used to describe the percentage of variation explained by X.
GRPI: GRPI stands for four critical and interrelated aspects of teamwork: goals, roles, processes, and interpersonal relationships, and it is a tool used to assess them. See the tool GRPI.
Hazard Rate: The hazard rate h(t) is the probability that a part that has already survived up to time t will fail in the next instant. The hazard rate is also called the instantaneous failure rate.
Histogram: A histogram is a basic graphing tool that displays the relative frequency or occurrence of continuous data values, showing which values occur most and least frequently. A histogram illustrates the shape, centering, and spread of data distribution and indicates whether there are any outliers. See the tool Histogram.
Homogeneity of variance: Homogeneity of variance is a test used to determine if the variances of two or more samples are different. See the tool Homogeneity of Variance.
Hypothesis testing: Hypothesis testing refers to the process of using statistical analysis to determine if the observed differences between two or more samples are due to random chance (as stated in the null hypothesis) or to true differences in the samples (as stated in the alternate hypothesis). A null hypothesis (H0) is a stated assumption that there is no difference in parameters (mean, variance, DPMO) for two or more populations. The alternate hypothesis (Ha) is a statement that the observed difference or relationship between two populations is real and not the result of chance or an error in sampling. Hypothesis testing is the process of using a variety of statistical tools to analyze data and, ultimately, to accept or reject the null hypothesis. From a practical point of view, finding statistical evidence that the null hypothesis is false allows you to reject the null hypothesis and accept the alternate hypothesis.
I-MR Chart: An I-MR chart, or individual and moving range chart, is a graphical tool that displays process variation over time. It signals when a process may be going out of control and shows where to look for sources of special cause variation. See the tool I-MR Control.
In control: In control refers to a process unaffected by special causes. A process that is in control is affected only by common causes. A process that is out of control is affected by special causes in addition to the common causes affecting the mean and/or variance of a process.
Independent variable: An independent variable is an input or process variable (X) that can be set directly to achieve a desired output.
Intangible benefits: Intangible benefits, also called soft benefits, are the gains attributable to your improvement project that are not reportable for formal accounting purposes. These benefits are not included in the financial calculations because they are nonmonetary or are difficult to attribute directly to quality. Examples of intangible benefits include cost avoidance, customer satisfaction and retention, and increased employee morale.
Interaction: An interaction occurs when the response achieved by one factor depends on the level of the other factor. On an interaction plot, when the lines are not parallel, there is an interaction.
Interrelationship digraph: An interrelationship digraph is a visual display that maps out the cause and effect links among complex, multivariable problems or desired outcomes.
IQR: Interquartile range (from a box plot), representing the range between the 25th and 75th percentiles.
Kano Analysis: Kano analysis is a quality measurement used to prioritize customer requirements.
Kruskal-Wallis: Kruskal-Wallis performs a hypothesis test of the equality of population medians for a one-way design (two or more populations). This test is a generalization of the procedure used by the Mann-Whitney test and, like Mood’s median test, offers a nonparametric alternative to the one-way analysis of variance. The Kruskal-Wallis test looks for differences among the population medians. The Kruskal-Wallis test is more powerful (the confidence interval is narrower, on average) than Mood’s median test for analyzing data from many distributions, including data from the normal distribution, but is less robust against outliers.
Kurtosis: Kurtosis is a measure of how peaked or flat a curve’s distribution is.
L1 Spreadsheet: An L1 spreadsheet calculates defects per million opportunities (DPMO) and a process Z value for discrete data.
L2 Spreadsheet: An L2 spreadsheet calculates the short-term and long-term Z values for continuous data sets.
Leptokurtic Distribution: A leptokurtic distribution is symmetrical in shape, similar to a normal distribution, but the center peak is much higher; that is, there is a higher frequency of values near the mean. In addition, a leptokurtic distribution has a higher frequency of data in the tail area.
Levels: Levels are the different settings a factor can have. For example, if you are trying to determine how the response (speed of data transmittal) is affected by the factor (connection type), you would need to set the factor at different levels (modem and LAN) and then measure the change in response.
Linearity: Linearity is the variation between a known standard, or “truth,” across the low and high end of the gage. It is the difference between an individual’s measurements and that of a known standard or truth over the full range of expected values.
LSL: A lower specification limit, also known as a lower spec limit or LSL, is a value above which performance of a product or process is acceptable.
Lurking variable: A lurking variable is an unknown, uncontrolled variable that influences the output of an experiment.
Main Effect: A main effect is a measurement of the average change in the output when a factor is changed from its low level to its high level. It is calculated as the average output when a factor is at its high level minus the average output when the factor is at its low level.
Mallows Statistic (Cp): A statistic within Regression > Best Fits used as a measure of bias (i.e., when the predicted differs from the truth). Should be approximately equal to (#vars + 1).
Mann-Whitney: Mann-Whitney performs a hypothesis test of the equality of two population medians and calculates the corresponding point estimate and confidence interval. Use this test as a nonparametric alternative to the two-sample t-test.
Mean: The mean is the average data point value within a data set. To calculate the mean, add all of the individual data points, then divide that figure by the total number of data points.
Mean availability: The probability that a product will perform its intended function at any time, when used under stated operating conditions.
Mean Time Between Failure (MTBF): For a repairable item, the ratio of the cumulative operating time to the number of failures for that item. The metric most widely used during the useful life period, when the hazard rate is constant.
Mean Time To Failure (MTTF): For non-repairable items, the ratio of the cumulative operating time to the number of failures for a group of items. The metric most widely used during the useful life period, when the hazard rate is constant.
Measurement system analysis: Measurement system analysis is a mathematical method of determining how much the variation within the measurement process contributes to overall process variability.
Median: The median is the middle point of a data set; 50% of the values are below this point, and 50% are above this point.
Mid-range: Halfway between the minimum and maximum; note that this is not necessarily the same as the median.
Mode: The most often occurring value in the data set.
Mood’s Median: Mood’s median test can be used to test the equality of medians from two or more populations and, like the Kruskal-Wallis test, provides a nonparametric alternative to the one-way analysis of variance. Mood’s median test is sometimes called a median test or sign scores test. It tests H0: the population medians are all equal, versus H1: the medians are not all equal. An assumption of Mood’s median test is that the data from each population are independent random samples and the population distributions have the same shape. Mood’s median test is robust against outliers and errors in data and is particularly appropriate in the preliminary stages of analysis. It is more robust than the Kruskal-Wallis test against outliers, but is less powerful for data from many distributions, including the normal.
MTBF (Mean Time Between Failure): See Mean Time Between Failure.
MTTF (Mean Time To Failure): See Mean Time To Failure.
Multicollinearity: Multicollinearity is the degree of correlation between Xs. It is an important consideration when using multiple regression on data that has been collected without the aid of a design of experiment (DOE). A high degree of multicollinearity may lead to regression coefficients that are too large or are headed in the wrong direction from what you had expected based on your knowledge of the process. High correlations between Xs also may result in a large p-value for an X that changes when the intercorrelated X is dropped from the equation. The variance inflation factor provides a measure of the degree of multicollinearity.
Multiple regression: Multiple regression is a method of determining the relationship between a continuous process output (Y) and several factors (Xs).
Multi-vari chart: A multi-vari chart is a tool that graphically displays patterns of variation. It is used to identify possible Xs or families of variation, such as variation within a subgroup, between subgroups, or over time. See the tool Multi-Vari Chart.
Noise: Process input that consistently causes variation in the output measurement that is random and expected and, therefore, not controlled is called noise. Noise also is referred to as white noise, random variation, common cause variation, noncontrollable variation, and within-group variation.
Nominal: Refers to the value that you estimate in a design process that approximates your real CTQ (Y) target value based on the design element capacity. Nominals are usually referred to as point estimates and are related to the y-hat model.
Non-parametric: A set of tools that avoids assuming a particular distribution.
Normal Distribution: Normal distribution is the spread of information (such as product performance or demographics) where the most frequently occurring value is in the middle of the range and other probabilities tail off symmetrically in both directions. Normal distribution is graphically characterized by a bell-shaped curve, also known as a Gaussian distribution. For normally distributed data, the mean and median are very close and may be identical.
Normal probability: Used to check whether observations follow a normal distribution. P > 0.05 means the data are normal.
Normality test: A normality test is a statistical process used to determine if a sample or any group of data fits a standard normal distribution. A normality test can be performed mathematically or graphically. See the tool Normality Test.
Null Hypothesis (H0): A null hypothesis (H0) is a stated assumption that there is no difference in parameters (mean, variance, DPMO) for two or more populations. According to the null hypothesis, any observed difference in samples is due to chance or sampling error. It is written mathematically as H0: μ1 = μ2 or H0: σ1 = σ2. Defines what you expect to observe (e.g., all means are the same, or the factors are independent). (P > 0.05)
Occurrence: How likely is the cause of the failure mode to occur?
Operational definition: An operational definition is a precise description that tells how to get a value for the characteristic (CTQ) you are trying to measure. It includes “what something is” and “how to measure it.”
Opportunity: An opportunity is anything that you inspect, measure, or test on a unit that provides a chance of allowing a defect.
Outlier: An outlier is a data point that is located far from the rest of the data. Given a mean and standard deviation, a statistical distribution expects data points to fall within a specific range. Those that do not are called outliers and should be investigated to ensure that the data is correct. If the data is correct, you have witnessed a rare event or your process has changed. In either case, you need to understand what caused the outliers to occur.
PDF (Probability Density Function): The Probability Density Function (PDF) is the distribution f(t) of times to failure. The value of f(t) represents the relative likelihood of the system failing at time t.
Percent of tolerance: Percent of tolerance is calculated by taking the measurement error of interest, such as repeatability and/or reproducibility, dividing by the total tolerance range, then multiplying the result by 100 to express the result as a percentage.
Performance standard: A performance standard is the requirement(s) or specification(s) imposed by the customer on a specific CTQ. It answers the questions: What does the customer want? What is a good product/process? What is a defect?
Plackett-Burman Designs (DOE): A DOE design used when it is too costly to perform the 2^k screening design. Caution must be used here due to the probable loss of knowing where two-factor interactions are confounded.
Platykurtic Distribution: A platykurtic distribution is one in which most of the values share about the same frequency of occurrence. As a result, the curve is very flat, or plateau-like. Uniform distributions are platykurtic.
Pooled Standard Deviation: Pooled standard deviation (Sp) is the standard deviation remaining after removing the effect of special cause variation, such as geographic location or time of year. It is the average variation of your subgroups.
Precision: Precision is an estimate of the overall variation in the measurement system, including repeatability and reproducibility.
Prediction Band (or interval): A measurement of the certainty of the scatter about a certain regression line. A 95% prediction band indicates that, in general, 95% of the points will be contained within the bands.
Probability: Probability refers to the chance of something happening, or the fraction of occurrences over a large number of trials. Probability can range from 0 (no chance) to 1 (full certainty).
Probability of Defect: Probability of defect is the statistical chance that a product or process will not meet performance specifications or lie within the defined upper and lower specification limits. It is the ratio of expected defects to the total output and is expressed as p(d). Process capability can be determined from the probability of defect.
Process Capability: Process capability refers to the ability of a process to produce a defect-free product or service. Various indicators are used; some address overall performance, some address potential performance.
Process Control System: Ensures that process performance always meets customer requirements; defines the course of action if process performance does not meet performance standards.
Producer’s Risk: Concluding something is good when it is actually bad (Type I error).
Production Reliability Acceptance Testing (PRAT): Ensures that variation in materials, parts, and processes related to the move from prototypes to full production does not affect product reliability.
p-value: The p-value represents the probability of concluding (incorrectly) that there is a difference in your samples when no true difference exists. It is a statistic calculated by comparing the distribution of given sample data and an expected distribution (normal, F, t, etc.) and is dependent upon the statistical test being performed. For example, if two samples are being compared in a t-test, a p-value of 0.05 means that there is only a 5% chance of arriving at the calculated t value if the samples were not different (from the same population). In other words, a p-value of 0.05 means there is only a 5% chance that you would be wrong in concluding the populations are different. P-value < 0.05 = safe to conclude there’s a difference. The p-value is the risk of wasting time investigating further.
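A minimal sketch in Python (SciPy assumed; the before/after measurements are hypothetical), interpreting the p-value against an alpha risk of 0.05:

```python
# Two-sample t-test on hypothetical data.
from scipy.stats import ttest_ind

before = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
after  = [11.2, 11.5, 11.0, 11.4, 11.3, 11.1]

t_stat, p_value = ttest_ind(before, after)
if p_value < 0.05:
    print(f"p={p_value:.4f}: safe to conclude the means differ")
else:
    print(f"p={p_value:.4f}: no evidence of a difference")
```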
Q1: The 25th percentile (from a box plot).
Q3: The 75th percentile (from a box plot).
Qualitative data: Discrete data.
Quality Function Deployment: Quality function deployment (QFD) is a structured methodology used to identify customers’ requirements and translate them into key process deliverables. In Six Sigma, QFD helps you focus on ways to improve your process or product to meet customers’ expectations. See the tool Quality Function Deployment.
Quality Plan: A documented plan to ensure each product characteristic or process requirement stays in conformance. The quality plan also describes the flow of the process and standard operating procedures.
Quantitative data: Continuous data.
Radar Chart: A radar chart is a graphical display of the differences between actual and ideal performance. It is useful for defining performance and identifying strengths and weaknesses.
Randomization: Running experiments in a random order, not the standard order in the test layout. Helps to eliminate the effect of “lurking variables,” uncontrolled factors which might vary over the length of the experiment.
Rational Subgroup: A rational subgroup is a subset of data defined by a specific factor such as a stratifying factor or a time period. Rational subgrouping identifies and separates special cause variation (variation between subgroups caused by specific, identifiable factors) from common cause variation (unexplained, random variation caused by factors that cannot be pinpointed or controlled). A rational subgroup should exhibit only common cause variation.
Regression analysis: Regression analysis is a method of analysis that enables you to quantify the relationship between two or more variables (X and Y) by fitting a line or plane through all the points such that they are evenly distributed about the line or plane. Visually, the best-fit line is represented on a scatter plot by a line or plane. Mathematically, the line or plane is represented by a formula that is referred to as the regression equation. The regression equation is used to model process performance (Y) based on a given value or values of the process variable (X).
Reliability: The probability that an item can perform its intended function for a specified interval under stated conditions without failure. The reliability of a product is the probability that it does not fail before time t; it is therefore the complement of the CDF.
Reliability Demonstration Testing: Purpose: to demonstrate the product’s ability to fulfill reliability, availability, and design requirements under realistic conditions.
Reliability Growth Testing: Purpose: to determine a product’s physical limitations, functional capabilities, and inherent failure mechanisms. Emphasis is on discovering and “eliminating” failure modes.
Repeatability (EV): Repeatability is the variation in measurements obtained when one person takes multiple measurements using the same techniques on the same parts or items. Repeatability is often expressed in terms of “pure error.” Repetition also helps determine the precision of measurement.
Replicates: The number of times you ran each corner (i.e., DOE corner). Example: 2 replicates means you ran a corner twice.
Replication: Replication occurs when an experimental treatment is set up and conducted more than once. If you collect two data points at each treatment, you have two replications. In general, plan on making between two and five replications for each treatment. Replicating an experiment allows you to estimate the residual or experimental error. This is the variation from sources other than the changes in factor levels. A replication is not two measurements of the same data point but a measurement of two data points under the same treatment conditions. For example, to make a replication, you would not have two persons time the response of a call from the northeast region during the night shift. Instead, you would time two calls into the northeast region’s help desk during the night shift.
Reproducibility: Reproducibility is the variation in average measurements obtained when two or more people measure the same parts or items using the same measuring technique.
Residual: A residual is the difference between the actual Y output value and the Y output value predicted by the regression equation. The residuals in a regression model can be analyzed to reveal inadequacies in the model. Also called “errors.”
Resolution: Resolution is a measure of the degree of confounding among effects. Roman numerals are used to denote resolution. The resolution of your design defines the amount of information that can be provided by the design of experiment. As with a computer screen, the higher the resolution of your design, the more detailed the information you will see. The lowest resolution you can have is resolution III.
Risk Priority Number (RPN): A numerical calculation of the relative risk of a particular failure mode. RPN = SEV x OCC x DET. This number is used to place priority on which items need additional quality planning.
Robust Process: A robust process is one that is operating at 6 sigma and is therefore resistant to defects. Robust processes exhibit very good short-term process capability (high short-term Z values) and a small Z shift value. In a robust process, the critical elements usually have been designed to prevent or eliminate opportunities for defects; this effort ensures sustainability of the process. Continual monitoring of robust processes is not usually needed, although you may wish to set up periodic audits as a safeguard.
Rolled Throughput Yield: Rolled throughput yield is the probability that a single unit can pass through a series of process steps free of defects.
R-squared: A mathematical term describing how much variation is explained by the X; answers the question of how much of the total variation is explained by X. FORMULA: R-sq = SS(regression) / SS(total). Caution: R-sq increases as the number of terms in the model increases.
R-squared (adjusted): Unlike R-squared, adjusted R-squared takes into account the number of X’s and the number of data points; it also answers how much of the total variation is explained by X. FORMULA: R-sq(adj) = 1 − [(SS(error)/DF(error)) / (SS(total)/DF(total))].
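A minimal sketch in Python (NumPy assumed; hypothetical data) computing both quantities directly from the sums of squares, with p as the number of X's:

```python
# Computing R-sq and R-sq(adj) from sums of squares for a simple linear fit.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

ss_total = np.sum((y - y.mean()) ** 2)   # total variation
ss_error = np.sum((y - y_hat) ** 2)      # unexplained variation
n, p = len(y), 1                         # p = number of X's (predictors)

r_sq = 1 - ss_error / ss_total           # equivalently SS(regression)/SS(total)
r_sq_adj = 1 - (ss_error / (n - p - 1)) / (ss_total / (n - 1))
print(f"R-sq={r_sq:.4f}, R-sq(adj)={r_sq_adj:.4f}")
```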
Sample: A portion or subset of units taken from the population whose characteristics are actually measured.
Sample Size Calc.: The sample size calculator is a spreadsheet tool used to determine the number of data points, or sample size, needed to estimate the properties of a population. See the tool Sample Size Calculator.
Sampling: Sampling is the practice of gathering a subset of the total data available from a process or a population.
Scatter plot: A scatter plot, also called a scatter diagram or a scattergram, is a basic graphic tool that illustrates the relationship between two variables. The dots on the scatter plot represent data points. See the tool Scatter Plot.
Scorecard: A scorecard is an evaluation device, usually in the form of a questionnaire, that specifies the criteria your customers will use to rate your business’s performance in satisfying their requirements.
Screening DOE: A screening design of experiment (DOE) is a specific type of fractional factorial DOE. A screening design is a resolution III design, which minimizes the number of runs required in an experiment. A screening DOE is practical when you can assume that all interactions are negligible compared to main effects. Use a screening DOE when your experiment contains five or more factors. Once you have screened out the unimportant factors, you may want to perform a fractional or full factorial DOE. EXAMPLE: Number of screening runs required for a 7-factor (2-level) design = 7 + 1 = 8 runs. This is similar to a 2^3 full factorial design.
Segmentation: Segmentation is a process used to divide a large group into smaller, logical categories for analysis. Some commonly segmented entities are customers, data sets, or markets.
Severity: How significant is the impact of the effect on the customer (internal or external)?
S-hat Model: Describes the relationship between output variance and input nominals.
Sigma: The Greek letter σ (sigma) refers to the standard deviation of a population. Sigma, or standard deviation, is used as a scaling factor to convert upper and lower specification limits to Z. Therefore, a process with three standard deviations between its mean and a spec limit would have a Z value of 3 and commonly would be referred to as a 3 sigma process.
Simple Linear Regression: Simple linear regression is a method that enables you to determine the relationship between a continuous process output (Y) and one factor (X). The relationship is typically expressed in terms of a mathematical equation, such as Y = b + mX.
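A minimal sketch in Python (SciPy assumed; the temperature/strength data are hypothetical) fitting Y = b + mX:

```python
# Simple linear regression on hypothetical process data.
from scipy.stats import linregress

x = [150, 160, 170, 180, 190]   # e.g., oven temperature (X)
y = [3.2, 3.6, 4.1, 4.4, 4.9]   # e.g., cure strength (Y)

result = linregress(x, y)
print(f"Y = {result.intercept:.3f} + {result.slope:.4f} * X,  r={result.rvalue:.3f}")
```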
SIPOC: SIPOC stands for suppliers, inputs, process, output, and customers. You obtain inputs from suppliers, add value through your process, and provide an output that meets or exceeds your customer’s requirements.
Skewness: Most often, the median is used as a measure of central tendency when data sets are skewed. The metric that indicates the degree of asymmetry is called, simply, skewness. Skewness often results in situations when a natural boundary is present. Normal distributions will have a skewness value of approximately zero. Right-skewed distributions will have a positive skewness value; left-skewed distributions will have a negative skewness value. Typically, the skewness value will range from negative 3 to positive 3. Two examples of skewed data sets are salaries within an organization and monthly prices of homes for sale in a particular area.
Span: A measure of variation for “S-shaped” fulfillment Y’s.
Special cause variability: Unlike common cause variability, special cause variation is caused by known factors that result in a non-random distribution of output. Also referred to as “exceptional” or “assignable” variation. Example: few X’s with a big impact.
Specification limits: Limits that are external to the process; they could represent engineering requirements to satisfy a customer CTQ.
Spread: The spread of a process represents how far data points are distributed away from the mean, or center. Standard deviation is a measure of spread.
SS Process Report: The Six Sigma process report is a Minitab™ tool that calculates process capability and provides visuals of process performance. See the tool Six Sigma Process Report.
SS Product Report: The Six Sigma product report is a Minitab™ tool that calculates the DPMO and short-term capability of your process. See the tool Six Sigma Product Report.
Stability: Stability represents variation due to elapsed time. It is the difference between an individual’s measurements taken of the same parts after an extended period of time using the same techniques.
Standard Deviation (s): Standard deviation is a measure of the spread of data in relation to the mean. It is the most common measure of the variability of a set of data. If the standard deviation is based on a sample, it is referred to as “s.” If the entire data population is used, standard deviation is represented by the Greek letter σ (sigma). The standard deviation (together with the mean) is used to measure the degree to which the product or process falls within specifications. The lower the standard deviation, the more likely the product or service falls within spec. When the standard deviation is calculated in relation to the mean of all the data points, the result is an overall standard deviation. When the standard deviation is calculated in relation to the means of subgroups, the result is a pooled standard deviation. Together with the mean, both overall and pooled standard deviations can help you determine your degree of control over the product or process.
Standard Order: Design of experiment (DOE) treatments often are presented in a standard order. In a standard order, the first factor alternates between the low and high setting for each treatment. The second factor alternates between low and high settings every two treatments. The third factor alternates between low and high settings every four treatments. Note that each time a factor is added, the design doubles in size to provide all combinations for each level of the new factor.
Statistic: Any number calculated from sample data; describes a sample characteristic.
Statistical Process Control (SPC): Statistical process control is the application of statistical methods to analyze and control the variation of a process.
Stratification: A stratifying factor, also referred to as stratification or a stratifier, is a factor that can be used to separate data into subgroups. This is done to investigate whether that factor is a significant special cause factor.
Subgrouping: Dividing data into subgroups (see rational subgroup); within-subgroup variation is a measure of where the process can get (its short-term capability).
Test-Retest Study: Determines the “precision” of the system, instrument, device, or gage, where precision = measurement error = repeatability.
Tolerance Range: Tolerance range is the difference between the upper specification limit and the lower specification limit.
Total Observed Variation: Total observed variation is the combined variation from all sources, including the process and the measurement system.
Total Prob of Defect: The total probability of defect is equal to the sum of the probability of defect above the upper spec limit, p(d) upper, and the probability of defect below the lower spec limit, p(d) lower.
Transfer function: A transfer function describes the relationship between lower level requirements and higher level requirements. If it describes the relationship between the nominal values, then it is called a y-hat model. If it describes the relationship between the variations, then it is called an s-hat model.
Transformations: Used to make non-normal data look more normal.
Trivial many: The trivial many refers to the variables that are least likely responsible for variation in a process, product, or service.
T-test: A t-test is a statistical tool used to determine whether a significant difference exists between the means of two distributions or the mean of one distribution and a target value. See the t-test tools.
Tukey’s: (1-way ANOVA) Check to obtain confidence intervals for all pairwise differences between level means using Tukey’s method (also called Tukey’s HSD or Tukey-Kramer method). Specify a family error rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.
Unexplained Variation (S): Regression statistical output that shows the unexplained variation in the data. S = sqrt(SS(error) / DF(error)), the standard deviation of the residuals.
Unit: A unit is any item that is produced or processed.
USL: An upper specification limit, also known as an upper spec limit, or USL, is a value below which performance of a product or process is acceptable.
Validation Testing: Purpose: to ensure the product is performing reliably in the customer environment.
Variance: In statistics, the square of the standard deviation of a sample or set of data, used procedurally to analyze the factors that may influence the distribution or spread of the data under consideration. Also a measurement of the concentration of data.
Variation: Variation is the fluctuation in process output. It is quantified by standard deviation, a measure of the average spread of the data around the mean. Variation is sometimes called noise. Variance is the squared standard deviation.
Variation (common cause): Common cause variation is fluctuation caused by unknown factors resulting in a steady but random distribution of output around the average of the data. It is a measure of the process potential, or how well the process can perform when special cause variation is removed; therefore, it is a measure of the process’s technology. Also called inherent variation.
Variation (special cause): Special cause variation is a shift in output caused by a specific factor such as environmental conditions or process input parameters. It can be accounted for directly and potentially removed and is a measure of process control, or how well the process is performing compared to its potential. Also called non-random variation.
Weibull Plot: A plot where the x-axis is scaled as ln(time) and the y-axis is scaled as ln(ln(1 / (1 − CDF(t)))). The Weibull CDF plotted on Weibull paper will be a straight line with slope β (the shape parameter) and intercept −β·ln(η), where η is the scale parameter.
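A minimal sketch in Python (NumPy assumed; the failure times are hypothetical, and Benard's median-rank approximation is a common plotting-position choice not specified in the original text):

```python
# Weibull plot fit by median-rank regression on hypothetical failure times;
# estimates the shape (beta) and scale (eta) parameters.
import numpy as np

times = np.sort(np.array([320., 510., 690., 850., 1050., 1340., 1700.]))
n = len(times)
ranks = np.arange(1, n + 1)

# Benard's approximation for median ranks (estimated CDF at each failure).
cdf = (ranks - 0.3) / (n + 0.4)

# Weibull plot coordinates: x = ln(t), y = ln(ln(1 / (1 - CDF(t)))).
x = np.log(times)
y = np.log(np.log(1.0 / (1.0 - cdf)))

beta, c = np.polyfit(x, y, deg=1)   # slope = beta; intercept = -beta * ln(eta)
eta = np.exp(-c / beta)
print(f"beta={beta:.2f}, eta={eta:.0f} hours")
```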
Whisker: From a box plot: displays the minimum and maximum observations within 1.5 IQR (75th–25th percentile span) of either the 25th or 75th percentile. Outliers are those that fall outside the 1.5 IQR range.
Yield: Yield is the percentage of a process that is free of defects.
Yield: Classical: Classical yield is the number of defect-free parts for the whole process divided by the total number of parts inspected.
Yield: First Time: First time yield is the number of defect-free parts divided by the total number of parts inspected for the first time.
Yield: Throughput: The percentage of units that pass through an operation without defects.
Z: A Z value is a data point’s position between the mean and another location as measured by the number of standard deviations. Z is a universal measurement because it can be applied to any unit of measure. Z is a measure of process capability and corresponds to the process sigma value that is reported by the businesses. For example, a 3 sigma process means that three standard deviations lie between the mean and the nearest specification limit. Three is the Z value.
Z bench: Z bench is the Z value that corresponds to the total probability of a defect. It is calculated using the following steps: (1) Calculate Z(LSL) = (xbar − LSL) / S; (2) look up the Z value in a table to determine P(d) LSL; (3) calculate Z(USL) = (USL − xbar) / S; (4) look up the Z value in a table to determine P(d) USL; (5) sum P(d) LSL + P(d) USL and use the Z table to find Zbench (i.e., the Z for that total P(d)).
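A minimal sketch in Python (SciPy assumed; the spec limits and process statistics are hypothetical) following the steps above, using SciPy's normal distribution in place of a printed Z table:

```python
# Z bench from hypothetical spec limits and process data.
from scipy.stats import norm

xbar, s = 10.0, 0.5          # process mean and standard deviation
lsl, usl = 8.5, 11.0         # spec limits

p_lsl = norm.sf((xbar - lsl) / s)   # P(d) below LSL, from Z(LSL)
p_usl = norm.sf((usl - xbar) / s)   # P(d) above USL, from Z(USL)

z_bench = norm.isf(p_lsl + p_usl)   # Z corresponding to the total P(d)
print(f"Z(LSL)={(xbar-lsl)/s:.2f}, Z(USL)={(usl-xbar)/s:.2f}, Zbench={z_bench:.2f}")
```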
Z lt: Z long term (ZLT) is the Z bench calculated from the overall standard deviation and the average output of the current process. Used with continuous data, ZLT represents the overall process capability and can be used to determine the probability of making out-of-spec parts within the current process.
Z shift: Z shift is the difference between ZST and ZLT. The larger the Z shift, the more you are able to improve the control of the special factors identified in the subgroups.
Z st: Z short term (ZST) represents the process capability when special factors are removed and the process is properly centered. ZST is the metric by which processes are compared.
