Total Quality Management for Project Management by Kim H. Pries, Jon M. Quigley



I. Acknowledgments xix
II. About the Authors xix
III. Preface xxi
IV. The State of the Art? xxiii

I. Rubric 1
II. Questions to Ponder 1
III. Why TQM Is Important to the Project Manager 2
IV. TQM Project Manager Scenario 2
V. Total Quality Management Prerequisites 3
VI. Organizational Attributes 5
VII. PDCA—Shewhart Cycle 6
VIII. Project Management 7
IX. What Is Program Management? 8
X. Why TQM Is Not Another Management Fix 10
XI. How to Change the Culture 10
XII. Eliminating Junk Activities 12
XIII. Exercises 14

I. Rubric 15
II. Questions to Ponder 15
III. Why Metrics and Requirements Are Important to the Project Manager 16
IV. TQM Project Manager Scenario 16
V. Product Requirements 17
VI. Project Requirements 31
VII. Derived Requirements 34
VIII. Internal Requirements 34
IX. Regulatory Requirements 34
X. Standards 35
XI. Exercises 40

I. Rubric 41
II. Questions to Ponder 41
III. Why TQM Tools Are Important to the Project Manager 42
IV. TQM Project Manager Scenario 42
V. Benefits to the TQM Project Manager 43
VI. Pareto Chart 44
VII. Scatter Plots 46
VIII. Control Charts 54
IX. Selection of Variable 55
X. Flow Charts 56
XI. Ishikawa Diagram (Fish Bone Diagram, Cause and Effect Diagram) 58
XII. Histogram/Bar Graphs 58
XIII. Checklists/Check Sheets 59
XIV. Exercises 61

I. Rubric 63
II. Questions to Ponder 63
III. Why Project Management Tools Are Important to the Project Manager 64
IV. TQM Project Manager Scenario 64
V. Scope 65
VI. Project Estimating Techniques 66
VII. Project Budgeting 75
VIII. Cost Estimating 76
IX. Project Scheduling Fundamentals 79
X. Communications Basics 82
XI. Project Metrics and Control 85
XII. Risk Management Fundamentals 96
XIII. Project Termination Techniques 100
XIV. Exercises 107

I. Rubric 109
II. Questions to Ponder 109
III. Why Statistics and Control Are Important to the Project Manager 110
IV. TQM Project Manager Scenario 110
V. What Does Control Mean? 111
VI. Project Risk and Management 111
VII. Attributes Data 113
VIII. Variables Data 116
IX. Statistical Process Control (SPC) in Use 116
X. Exercises 120

I. Rubric 121
II. Questions to Ponder 121
III. Why Process Analysis and Improvement Are Important to the Project Manager 122
IV. TQM Project Manager Scenario 122
V. Functional Decomposition 123
VI. Work Breakdown Structures 123
VII. Scope of Work 130
VIII. Exercises 143

I. Rubric 145
II. Questions to Ponder 145
III. Why Process Controls and Metrics Are Important to the Project Manager 146
IV. TQM Project Manager Scenario 146
V. Risk Management 147
VI. Hazard Analysis and Critical Control Point Method 155
VII. Scope 156
VIII. Communication 159
IX. Change Management 160
X. Exercises 174

I. Rubric 177
II. Questions to Ponder 177
III. Why Inspection and QA Are Important to the Project Manager 177
IV. TQM Project Manager Scenario 178
V. Inspection with Attributes 179
VI. Inspection with Variables 181
VII. Skip Lot Inspection 182
VIII. Continuous Sampling Plans 182
IX. Dodge-Romig 183
X. First Article Inspection 183
XI. What Is a Meaningful Sample? 183
XII. Failure Types 184
XIII. Inspections and Project Management 184
XIV. Exercises 184

I. Rubric 187
II. Questions to Ponder 187
III. Why Statistics and Control Are Important to the Project Manager 188
IV. TQM Project Manager Scenario 188
V. Tracking Metrics 189
VI. Product Quality over Time 195
VII. Project Quality over Time 196
VIII. Exercises 196

CHAPTER 10 – Other Supporting Initiatives 199
I. Rubric 199
II. Questions to Ponder 199
III. Why Maturity Models Are Important to the Project Manager 199
IV. TQM Project Manager Scenario 200
V. Capability Maturity Models 201
VI. Exercises 228

Appendix 1 – Change Management 231
I. Change Management 231
II. Configuration Management 231

Appendix 2 – TEMP Example 241
I. Overview 241
II. Integrated Test Program Summary 245
III. Developmental Test and Evaluation Outline 247
IV. Operational Test and Evaluation Outline 251
V. Test and Evaluation Resource Summary 253

I. Product Verification 263

List of Figures

Figure 1.1 The butterfly effect from chaos theory suggests small causes can have large effects. 2
Figure 1.2 Example of a well-defined organization hierarchy. 4
Figure 1.3 The Shewhart Plan-Do-Check-Act (PDCA) cycle. 6
Figure 1.4 PM balancing act of the stakeholder expectations and resources. 8
Figure 1.5 Program-project hierarchy. 9
Figure 1.6 No silver bullet for project management! 10
Figure 1.7 Process flow from supplier to customer, whether internal or external. 11
Figure 1.8 Organization chart outline showing bottom-up improvement and top-down improvement – one key to cultural change. 12
Figure 1.9 Relations of different engineering organizations in a typical enterprise. 13
Figure 2.1 A typical instrument cluster for a truck. 18
Figure 2.2 Variety of product requirements and demands. 19
Figure 2.3 A typical test “buck,” with all controllers centralized and powered. 21
Figure 2.4 The entire vehicle HIL rig, showing the electronic control unit (ECU) cabinet. 22
Figure 2.5 Hardware in the loop rig with expanded graphical controls. 23
Figure 2.6 Pressure sensor calibration fixture. 24
Figure 2.7 Oscilloscopes perform key analyses for the design and testing teams. 25
Figure 2.8 Signal generator. 26
Figure 2.9 An improvement in tolerance philosophy results in increased margins. 27
Figure 2.10 Acceptable switch performance. Note the clean signal response. 29
Figure 2.11 Unacceptable switch performance. 30
Figure 2.12 Use the test tool to develop stimuli that replicate challenges generally found on the vehicle. 31
Figure 2.13 Radar diagram comparing project targets to actuals. 32
Figure 2.14 Note how charts and graphs provide quick, intuitive indication of project status. 33
Figure 2.15 Examples of regulatory agencies in the automotive world. 34
Figure 2.16 An SAE transient fixture, which replicates EMC bulk current injection and SAE transient simulation/injection. 35
Figure 2.17 The effect of insufficient data; insufficient detail makes it difficult to draw conclusions. 37
Figure 2.18 Range of testing is defined by criticality of the function. 38
Figure 2.19 Simplistic icons on a dashboard mislead more than they inform. 38
Figure 2.20 Artifacts. 39
Figure 3.1 An example of a typical Pareto chart showing the ordered data. The line helps discern the 80% point. 43
Figure 3.2 This Pareto chart shows the issues with an instrument cluster on a motor vehicle. 44
Figure 3.3 Pareto of instrument cluster failures by cost indicates where we find the most monetary damage. 45
Figure 3.4 Scatter charts show the correlation (not causation) between two factors. 46
Figure 3.5 Scatter plot of vehicle preparation for systems integration test shows a range of durations to make the vehicle a suitable test subject. 48
Figure 3.6 Problems found with the vehicle test subject that must be addressed prior to systems integration testing. 49
Figure 3.7 Old method of manually testing harness continuity checks. 50
Figure 3.8 Drawings and pinout descriptions at the ready for manual testing of the wire harness. 51
Figure 3.9 The NXPro graphic – the tool that performs the point-to-point wire harness checks against an input file. 51
Figure 3.10 Dynalab and cabinet test fixture for automated point-to-point testing. 52
Figure 3.11 An example of a multi-lead sensor calibration setup. 53
Figure 3.12 Example of a flow chart. 57
Figure 3.13 Ishikawa (cause-and-effect, fish bone) diagram helps catalog potential failure modes. 58
Figure 3.14 A typical normal distribution from relatively random data. 59
Figure 3.15 Example of countdown to production checklist. 60
Figure 3.16 Simple Gantt chart of an arbitrary project. 60
Figure 4.1 If we have a set of separated tasks, we must link them based on dependencies. 66
Figure 4.2 Tasks begin to fall into place as we analyze them. 67
Figure 4.3 Example of a network diagram, providing more information than we get from a Gantt chart. 67
Figure 4.4 The beta distribution models the range of values we can use to predict our task completion. 68
Figure 4.5 PERT task variance in tabular format. 69
Figure 4.6 PERT probability for durations. 69
Figure 4.7 The impact of task variation on dependent tasks. 70
Figure 4.8 Bugs or failures reported per individual allow us to check for potential skills problems. 73
Figure 4.9 Performance over time with average lines showing trends. 74
Figure 4.10 We begin to connect our tasks in a network that reflects the dependencies, also known as a directed graph. 79
Figure 4.11 The closer we get to the target, the better our estimates. 81
Figure 4.12 One model for communications. 83
Figure 4.13 A good escalation process provides a rational approach to bringing attention to a situation. 85
Figure 4.14 A tabular view of a relatively “good” schedule performance index. 86
Figure 4.15 Organization A SPI for multiple projects. 87
Figure 4.16 Earned value management (EVM) demonstrated with a two-stage project. 88
Figure 4.17 Example of Schedule Performance Index from a company with a wider variation. 88
