Earned Value Management (EVM) as an Oversight Tool for Major Capital Investments

Prepared for Members and Committees of Congress

This report focuses on a technique—earned value management (EVM)—for overseeing the cost,
schedule, and performance of major capital investments during the investment process (e.g.,
information technology systems, structures, weapons systems). EVM provides metrics to help
inform assessments of whether capital investments are “on track” from three perspectives: the
investment’s planned cost, time schedule, and functionality. Variance from a project’s planned
cost, schedule, and functionality might occur due to the inherent complexity and uncertainty of a
project, poor planning or implementation, or, sometimes, simply bad luck. Although EVM
attempts to address several significant monitoring and evaluation issues, other evaluation
techniques are typically necessary in order to understand why variance from the planned cost,
schedule, or functionality might be occurring. In addition, other evaluation methods are typically
more useful in assessing other significant questions, such as whether a project is worth
undertaking (or continuing) and, after the investment process is completed, whether an
investment might be having an impact on achievement of an agency’s mission or the success of a
public policy, compared with what would have happened without the investment.
EVM may be of increasing salience in executive branch practices and, implicitly or explicitly, in
presentations to Congress. This report begins by putting EVM within a broader context of risk
management and how capital investments are often evaluated and monitored throughout their life
cycles. Next, the report provides an overview of EVM terminology and concepts by using an
example and illustrating a few potential oversight issues and caveats. The report concludes with
some potential oversight questions for Congress. The report will be updated periodically.

Contents

Oversight of Major Capital Investments
Managing Risks and Evaluating Projects
    Managing Risks of Adverse Events and Forgone Opportunities
    Monitoring and Evaluating Capital Investments
Increasing Executive Branch Adoption of EVM for Budgeting, Management, and Reporting
    Related Statutory Directives and Requirements
    Budget Formulation, Management, and Contract Management
Illustrative and Hypothetical Example of EVM Measurements and Issues
    Purpose of Example
    Hypothetical Example: EVM for “Project B”
        Explanation of Planned Value
        Explanation of Earned Value and Schedule Variance
        Explanation of Actual Cost and Cost Variance
        EVM as More Useful Tool than Simple Spend Comparisons
        In Sum: EVM Metrics as Oversight Tools
    Caveat: EVM Metrics Might Tell Only Part of the Story
Potential Oversight Questions for Congress

Figures

Figure 1. Illustrative Activities and Evaluation Methods During Phases of a Capital Asset’s Life Cycle

Tables

Table 1. Baseline Plan of Work Units, “Project B1”
Table 2. Schedule Variance of Work Units, “Project B1”
Table 3. Cost Variance of Work Units, “Project B1”
Table 4. Spend Comparison of Work Units, “Project B1”

Contacts

Author Contact Information

Oversight of Major Capital Investments
For the federal government, capital investments include development and acquisition of major
capital assets, such as equipment (e.g., hardware for weapons systems, satellites, and information
technology (IT)); structures (e.g., the Capitol Visitor Center); intellectual property (e.g.,
software); and combinations thereof. A capital investment can play a critical role, therefore, in the
accomplishment of agency missions and the success of public policies. Congress has often taken
a strong interest in the performance, cost, and schedule of major capital investments. In addition,
Congress has taken a strong interest in establishing and maintaining accountability mechanisms to
ensure that the President, Office of Management and Budget (OMB), and agencies fulfill
statutory duties, accomplish prescribed public policy goals, report transparently and fully, and act
as careful stewards of public resources.1
Earned Value Management (EVM), which is the subject of this report, is a management technique
that provides metrics to help inform assessments of whether a capital investment is “on track”
from three perspectives: planned cost, time schedule, and functionality (the latter concept often
referred to as “performance,” “technical performance,” “capability,” “scope of work,” or
“planned value”). Using these three perspectives, EVM compares the planned cost, schedule, and
functionality against an (ideally) accurate measurement of what is actually happening.2
Frequently, EVM metrics serve to raise potential issues that require further investigation or study,
rather than provide the full picture of a project’s progress and status. EVM may be of increasing
salience in executive branch practices and, implicitly or explicitly, in presentations to Congress.
EVM might also be used as an oversight tool for Congress.
This report provides an overview of EVM terminology and concepts by using a hypothetical
example and illustrating a few potential oversight issues. To place EVM in context, however, it is
necessary to discuss first the risks associated with major capital investments, how capital
investments are often evaluated and monitored throughout their life cycles, and how EVM
appears to be increasingly adopted by executive branch agencies. After providing the hypothetical
EVM example and some related caveats, the report concludes with potential oversight questions
for Congress regarding EVM.

Managing Risks and Evaluating Projects

Managing Risks of Adverse Events and Forgone Opportunities

The planning and implementation of a major capital investment are not simple tasks.
Consequently, major capital projects in both the public and private sectors are considered to be
inherently risky. For purposes of capital asset investment, risk might be described as the

1 For example, see U.S. Congress, Senate Committee on Homeland Security and Governmental Affairs, Subcommittee
on Federal Financial Management, Government Information, and International Security, IT Programs at Risk: Is It Too
Late to Save $12 Billion?, hearing, 109th Cong., 2nd sess., Sept. 7, 2006 (Washington: GPO, 2007).
2 This report’s purpose is to provide an overview of EVM for a congressional audience and highlight potential
oversight issues, rather than provide a comprehensive overview of the mechanics of EVM. Readers with questions
about EVM terms, concepts, and procedures are encouraged to consult with the author, who can answer questions and
provide citations to relevant resources.

probability that adverse events will occur or favorable opportunities will not be exploited. Unless
a project is carefully managed, risk might increase substantially in both of these senses. Adverse
events that might occur include schedule delays, cost overruns, and performance shortfalls or
failures. Favorable opportunities that might not be pursued (or noticed) could include chances to
complete a project more quickly, at less cost, and with better than planned performance in
achieving the mission or one or more goals. In response, agencies must make substantial efforts
to plan effectively and manage risk in order to achieve successful outcomes. Given the
complexity and inherent risk of major capital investments, however, it should be noted that
problems cannot always be avoided, even in well-planned and implemented projects.
Many stakeholders—including Congress, the President, agency employees, and the public—often
take a strong interest in the performance, cost, and schedule of major capital investments. The
interest stems directly from the mission-critical nature of many investments. Furthermore, interest
in specific capital investments can become heightened when something has gone wrong. Indeed,
the consequences of a poorly managed capital investment project can be significant, and when
investments in major capital assets fail, they can fail spectacularly.3
Journalists, inspectors general (IGs), and the Government Accountability Office (GAO) are
sometimes the first actors to disclose publicly that a major capital investment by the U.S.
government has gone awry. Nonetheless, such disclosures are often lagging indicators of
problems, rather than leading or coinciding indicators. Furthermore, capital investment problems
themselves can be lagging indicators of more fundamental problems with management capacity,
transparency, and accountability.4

3 Many years and millions of dollars can be wasted on a large project, with little or nothing other than frustration to
show for the effort. For example, the Federal Bureau of Investigation’s (FBI’s) Virtual Case File (VCF) IT project, part
of the FBI’s Trilogy IT investment initiative, was abandoned in 2005 after $170 million was spent. See Chris Strom,
“FBI Solicits Proposals for New Information System,” GovExec.com, Aug. 9, 2005, available at
http://www.govexec.com/dailyfed/080905c1.htm; and U.S. Department of Justice, Office of the Inspector General, The
Federal Bureau of Investigation’s Management of the Trilogy Information Technology Modernization Project, report
05-07, Feb. 2005, available at http://www.usdoj.gov/oig/reports/FBI/a0507/final.pdf. Even when a project has not
necessarily “failed,” preventable cost, schedule, and performance problems can also have unwelcome consequences.
Adverse events like these can hamper an agency’s ability to accomplish its public policy mission and goals, as intended
by Congress; waste funds; disrupt the strategies and plans of stakeholders (e.g., non-governmental organizations or
state and local governments); and reduce public confidence in government’s stewardship of tax dollars.
4 Major capital investment problems can result from many factors, including the inherent complexity and uncertainty of
a project, poor planning or implementation during the investment process for a new or existing capital asset, or,
sometimes, simply bad luck. For example, some deviations from the planned cost, schedule, or functionality are not
necessarily reflective of mismanagement, but rather an inevitable result of a project’s complexity and uncertainty.
Other deviations might be characterized as management problems, however. Long before a project starts, insufficient
research and planning can ensure future difficulties. An agency or official might not complete the necessary research
and planning to understand fully the changes that need to be made, perhaps, or in what sequence the changes should be
pursued. The research and planning might be insufficient, at root, because of an omission of an important
consideration, insufficient resources, lack of involvement or cooperation among stakeholders, systematically poor
management, or a superior’s pressure or decision to proceed against a project manager’s or executive’s
recommendation. Implementation problems might also surface once a project has begun. Potential implementation
problems include insufficient coordination, top management attention or discipline, analytical capacity, resources,
skills, cooperation, or oversight. Problems can also result from unforeseen events, like major changes in circumstances
or technology, that might render a project obsolete before it is finished. Still other events that cause problems might be
foreseeable, but plausibly judged so unlikely that they need not be factored into decision making. Distinguishing among
potential causes of problems (e.g., uncertainty, poor management, bad luck) can be a challenge, however, requiring
further investigation and evaluation.

Monitoring and Evaluating Capital Investments

In tracking the planned versus actual progress regarding the planned cost, schedule, and
functionality of a major capital asset, EVM attempts to address several significant monitoring and
evaluation issues. EVM does not, however, address all important monitoring and evaluation
issues that could contribute to an assessment of an investment’s status and progress. Therefore,
EVM’s use as a management and oversight tool for major capital investments arguably should be
considered within a larger context.
A variety of techniques can be used to monitor and evaluate major capital investments
prospectively, concurrently, and retrospectively. Furthermore, these activities can occur
throughout the life cycle of a capital asset—that is, during initial planning, the acquisition or
development process, and operational use, as well as after disposal.5 Because EVM is intended to
inform assessments of whether a capital investment is on track with respect to planned cost,
schedule, and functionality, EVM takes place primarily during the acquisition process for a new
asset (or modification of an existing asset), but can also occur during planning. Figure 1 provides
illustrative lists of capital planning and programming activities that agencies often undertake
during each of the four phases of a capital investment’s life cycle. The figure also shows
corresponding lists of the monitoring and evaluation activities, including EVM, that are often
used to support the agency activities.6

5 In a capital investment context, monitoring and evaluation efforts are used to address several challenges, including (1)
identifying, analyzing, and proposing investment opportunities that could improve performance of a policy or agency
mission; (2) managing risk during the capital investment’s planning, acquisition (and development), and operational
use; (3) facilitating learning, improvement, and achievement of required functionality during operational use, in support
of a policy or agency mission; and (4) providing for transparency and accountability in achieving all of the above.
6 The phases are described (according to OMB’s perspective) in U.S. Executive Office of the President, Office of
Management and Budget, Circular No. A-11, Part 7, “Planning, Budgeting, Acquisition, and Management of Capital
Assets,” July 2007; and Part 7, supplement, “Capital Programming Guide,” version 2.0, June 2006, both available at
http://www.whitehouse.gov/omb/circulars/a11/current_year/a11_toc.html. Definitions of program evaluation terms,
techniques, and concepts can vary among authors. Nevertheless, some resources can provide initial assistance in
understanding how techniques might apply to a capital investment context. See Sandra Mathison, ed., Encyclopedia of
Evaluation (Thousand Oaks, CA: Sage, 2005); and Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao, eds.,
The Sage Encyclopedia of Social Science Research Methods (Thousand Oaks, CA: Sage, 2004). Few publications
provide an overview of EVM for non-practitioners. For background on EVM intended for a practitioner audience, see
Alan Webb, Using Earned Value: A Project Manager’s Guide (Hants, UK: Gower, 2003). For answers to frequently
asked questions about EVM (including a glossary of terms) intended mainly for U.S. federal government practitioners,
see http://www.acq.osd.mil/pm/faqs/faq.htm. For discussion of possible congressional roles concerning program
evaluation, including a glossary of selected terms and concepts, see CRS Report RL33301, Congress and Program
Evaluation: An Overview of Randomized Controlled Trials (RCTs) and Related Issues, by Clinton T. Brass, Blas
Nuñez-Neto, and Erin D. Williams.

Figure 1. Illustrative Activities and Evaluation Methods During Phases of a Capital
Asset’s Life Cycle
Source: CRS analysis.

Increasing Executive Branch Adoption of EVM for Budgeting, Management, and Reporting

Related Statutory Directives and Requirements

Concerns about project risks are not new, nor are efforts to mitigate risks of major capital
investments and prevent failures. For example, the first edition of OMB’s Capital Programming
Guide for agencies was published in 1997. Before then, Congress had enacted statutory
provisions that were related to major capital investments, including, among others, a provision
amending the Federal Property and Administrative Services Act of 1949 (41 U.S.C. § 263(a)),
which now declares that “[i]t is the policy of Congress that the head of each executive agency
should achieve, on average, 90 percent of the cost, performance, and schedule goals established
for major acquisition programs of the agency”;7 provisions to require the Secretary of Defense
and heads of all executive agencies to establish cost, performance, and schedule goals for major
acquisition programs;8 and the Clinger-Cohen Act of 1996 (P.L. 104-106, later codified and

7 This language was originally enacted in somewhat different form in Title V of the Federal Acquisition Streamlining
Act of 1994 (FASA V; P.L. 103-355, 108 Stat. 3351) and was later amended to reach its present form by the National
Defense Authorization Act for Fiscal Year 1998 (P.L. 105-85, 111 Stat. 1851).
8 Originally enacted in FASA V; 108 Stat. 3349 (DOD), 3351 (civilian agencies); currently codified at 10 U.S.C. §
2220 and 41 U.S.C. § 263(a), respectively.

amended), which gave OMB and agencies explicit duties for monitoring the progress of
information system investments. The duties included a requirement for agencies to
provide the means for senior management personnel of the executive agency to obtain timely
information regarding the progress of an investment in an information system, including a
system of milestones for measuring progress, on an independently verifiable basis, in terms
of cost, capability of the system to meet specified requirements, timeliness, and quality[.]9
Budget Formulation, Management, and Contract Management

OMB has issued a number of requirements and directions to executive agencies that specifically
concern EVM. For example, OMB included in its Circular A-11—an annual document that
communicates budget formulation and execution requirements to agencies—detailed directions
for agencies to use EVM for major capital assets (Part 7). Among other things, the circular
directed agencies to submit an “exhibit 300” (“Capital Asset Plan and Business Case Summary”)
for each “major investment” as part of their annual budget submissions to OMB, for the White
House’s use when formulating the President’s annual budget proposal for congressional
consideration. The circular also said that an exhibit 300 is “designed to coordinate OMB’s
collection of agency information for its reports to the Congress required by the Federal
Acquisition Streamlining Act of 1994 (FASA Title V) and the Clinger-Cohen Act of 1996.”10 The
circular defined EVM as follows, citing an industry standard to which agencies would be
expected to adhere:
Earned value management (EVM) is a project (investment) management tool that effectively
integrates the investment scope of work with schedule and cost elements for optimum
investment planning and control. The qualities and operating characteristics of earned value
management systems are described in American National Standards Institute
(ANSI)/Electronic Industries Alliance (EIA) Standard-748-1998, Earned Value
Management Systems, approved May 19, 1998. It was reaffirmed on August 28, 2002. A
copy of Standard 748 is available from Global Engineering Documents (1-800-854-7179).
Information on earned value management systems is available at http://www.acq.osd.mil/pm.
Citing “room for improvement in the execution of our IT projects,” OMB has also provided
directions concerning major IT capital investments to agency Chief Information Officers in an
August 2005 memorandum.11 The memorandum stated that, at the time, agencies were “already
required to meet four principal criteria” for IT capital investments. The criteria included, along
with two others, (1) the establishment and validation of a performance measurement baseline with
clear cost, schedule, and performance goals;12 and (2) the management and measurement of

9 40 U.S.C. § 11312(b)(6).
10 Ibid., p. 6.
11 U.S. Executive Office of the President, Office of Management and Budget, memorandum from Karen S. Evans,
Administrator, Office of E-Government and Information Technology, “Improving Information Technology (IT) Project
Planning and Execution,” M-05-23, Aug. 4, 2005, available at http://www.whitehouse.gov/omb/memoranda/fy2005/
m05-23.pdf.
12 As explained later, the term baseline essentially refers to a capital investment project’s explicitly planned costs,
schedule for completion, and functionality.

projects to within 10% of baseline goals through the use of an earned value management system
compliant with the ANSI standard cited in Circular A-11. The memorandum also added new
expectations for agencies, including directions to (1) ensure that cost, schedule, and performance
goals for new major IT projects are “independently validated for reasonableness” before
beginning development; and (2) provide, for all ongoing major IT projects with development
efforts, for independent validations of then-current cost, schedule, and performance baselines by
March 31, 2006, and submission of proposed changes of baselines for OMB approval.13
EVM requirements have also been added to executive branch acquisition regulations. On July 5,
2006, three agencies promulgated a final rule on behalf of the Civilian Agency Acquisition
Council and the Defense Acquisition Regulations Council to amend the Federal Acquisition
Regulation (FAR). The stated purpose of the change was, among other things, to “implement
earned value management system (EVMS) policy in accordance with OMB Circular A-11, Part 7
and the supplement to Part 7, the Capital Planning [sic] Guide” and “help standardize the use of
EVMS across the Government” for “those parts of [an] acquisition where developmental effort is
required.”14 The FAR changes required agencies to use an EVMS for certain major acquisitions
for development, in accordance with Circular A-11,15 and potentially also for other acquisitions,
in accordance with individual agency procedures. The new regulations also specified that “[t]he
qualities and operating characteristics of an earned value management system are described” in
the ANSI/EIA Standard 748 that was cited by Circular A-11 (see the extended Circular A-11
quotation, above). Under the regulations, agency contracting officers were directed to, “at a
minimum,” “require contractors to submit EVMS monthly reports for those contracts for which
an EVMS applies.”
In addition, OMB says it has used EVM data to determine agency scores under the George W.
Bush Administration’s initiative concerning the management of agencies in the executive
branch—the President’s Management Agenda (PMA).16 Although the PMA’s standards are
publicly available, PMA evaluation practices have not been fully transparent outside of OMB and

13 The memorandum explained what was meant by independent as follows: “An independent assessment may be
performed by a qualified source provided such source is not involved in the project’s development, implementation,
management, or direct supervision. Provided they are qualified, such source may include the agency Inspector General,
current independent verification and validation reviewers, or any other source internal to the agency or outside the
agency, including another agency. Agencies currently using Integrated Baseline Reviews (IBRs) may substitute an IBR
for an independent assessment. Reasonable baselines are accurate, relevant, timely, and complete. An IBR has been
defined as “[a] joint Government/contractor review to assess the realism and accuracy of the integrated performance
measurement baseline (work, schedule, and budget)” (see U.S. Department of Defense, Earned Value Management
Implementation Guide, Oct. 2006, p. 92, available at https://acc.dau.mil/CommunityBrowser.aspx?id=19557).
14 U.S. Department of Defense, U.S. General Services Administration, and U.S. National Aeronautics and Space
Administration, “Federal Acquisition Regulation; FAR Case 2004-019, Earned Value Management System (EVMS),”
71 Federal Register 38238, July 5, 2006.
15 OMB Circular A-11 says “Development/Modernization/Enhancement (DME) means the program cost for new
investments, changes or modifications to existing systems to improve capability or performance, changes mandated by
the Congress or agency leadership, personnel costs for investment management, and direct support” (italics in original).
OMB contrasts DME with another term: “Steady State (SS) means maintenance and operation costs at current
capability and performance level including costs for personnel, maintenance of existing information systems, corrective
software maintenance, voice and data communications maintenance, and replacement of broken IT equipment.” See
OMB Circular A-11, Sec. 53.
16 See http://www.whitehouse.gov/results/agenda/standardsforsuccess08-2007.pdf for “Expanded Electronic
Government” scorecard criteria that explicitly cite EVM data. For background on the PMA, see CRS Report RS21416,
The President’s Management Agenda: A Brief Introduction, by Virginia A. McMurtry.

the executive agencies.17 Detailed rationales and worksheets behind PMA scores are created, but
not publicly available or independently validated.

Illustrative and Hypothetical Example of EVM Measurements and Issues

Purpose of Example

Based on published requirements, it appears that the Bush Administration and most executive
branch agencies are widely adopting EVM for major capital assets for purposes of budgeting,
management, and reporting. Specifically, as noted above, EVM has been formally adopted in
requirements for preparation of the President’s annual budget proposal, agency contract
management, and formal scoring criteria under the Administration’s PMA initiative. It appears,
therefore, that EVM metrics will be presented either implicitly or explicitly to Congress in a
variety of venues, sometimes packaged along with the President’s views and representations
about budgetary and management priorities. Due to the substantially public nature of the budget
process, the metrics will similarly be presented either implicitly or explicitly to the public at
large, some of whom play a significant role in the budget process through provision of
information and opinions to elected officials.
It appears, therefore, that a potential ongoing issue for Congress will be judging whether
underlying EVM metrics are fully, forthrightly, timely, and transparently reported. In addition, it
appears that a corresponding issue for Congress might be understanding, interpreting, and
scrutinizing EVM metrics, as potential inputs to congressional oversight, appropriations of funds,
and authorization (or reauthorization) of agency and presidential activities. In each of these
situations, EVM information might be relevant to informing Congress’s assessments of whether
acquisitions of major capital assets are on track with respect to planned cost, schedule, and
functionality and, in turn, inform subsequent decision making.
With these considerations in mind, the following tables and corresponding text present a
hypothetical example of EVM metrics being applied to the development and acquisition of a
major capital asset. The purpose of explaining how EVM works through an example here is to
give concrete illustrations of the types of basic analyses that can be performed with EVM, as well
as to illustrate how EVM can be used to identify potential topics for follow-up investigation and
study (e.g., perhaps with congressional oversight or complementary evaluations included in
Figure 1). The type of asset described here could be any of those described at the beginning of
this report, including a weapons system, satellite, structure, IT system, etc. The example
presented here is adapted from an example that has been published in a variety of forms for a
number of years in federal government websites and publications,18 but also draws on other

17 For discussion, see CRS Report RL32388, General Management Laws: Major Themes and Management Policy
Options, by Clinton T. Brass, section titled “Making and Measuring Progress,” pp. 19-26.
18 The Department of Defense (DOD) has maintained a website on EVM for many years, including a brief “Illustrative
Explanation of Earned Value,” available at http://www.acq.osd.mil/pm/faqs/evbasics.htm. This report’s adaptation
draws some material verbatim from the DOD website. OMB’s Circular A-11 and Capital Programming Guide have
also included a nearly identical example for a number of years (App. 3 of the Capital Programming Guide, and some
previous versions of the circular). The definition of EVM in Circular A-11 points to the DOD website
(http://www.acq.osd.mil/pm/) as a place to get “[a]dditional information on EVMS.”

resources, as noted below. Some EVM-related terms are italicized and explained as they are
introduced.19

Hypothetical Example: EVM for “Project B”

For this hypothetical example, Congress has been presented with EVM data about a major capital
investment called “Project B.” The project has six work units, represented by the letters “A”
through “F,”20 which are interrelated but distinct subcomponents of the project. For a simple
example relating to a structure, work units might include concrete, framing, roofing, electrical,
plumbing, and interior. Collectively, work units A through F are also referred to as the project’s
work breakdown structure (WBS).21 In this example, Project B has been underway for a period of
time already, and all the work units were planned to be successfully completed by the time of this
EVM report. The original plan for Project B’s costs, schedule for completion, and functionality is
referred to as the project’s baseline,22 identified here as B1 (see Table 1). Later in the life of the
project, it is possible that the project could be “re-baselined” (baselined again with new cost,
schedule, and functionality plans, in order to, for example, conform to a management decision or
reflect new circumstances).23
Table 1. Baseline Plan of Work Units, “Project B1”

                                    Work Units of Project B
Baseline ID   Metric                 A     B     C     D     E     F    Total
B1            Planned value ($)     10    15    10    25    20    20      100
Explanation of Planned Value

To summarize the contents of Table 1, the baseline plan B1 for Project B shows that six work
units (A-F) are planned to be completed at an overall cost of $100 for the time period covered by
this report.24 Each work unit is listed with its own planned budget—that is, the amount of money
that is planned to be spent to successfully build and complete the functionality planned for the
work unit. Work unit D, therefore, is planned to be successfully completed within the time period
covered by this EVM report at a cost of $25, and work unit E is similarly planned to be completed

19 EVM practices and terms often differ somewhat from project to project, agency to agency, and type of asset to type
of asset. Therefore, the terminology and practices presented here will not necessarily reflect terminology and practices
for specific agencies, projects, or types of assets. The underlying concepts, however, presumably apply across the
board. Exact definitions of terms are available from a variety of sources, including Circular A-11, the DOD website,
and the ANSI/EIA-748-A-1998, “Earned Value Management Systems,” standard.
20 Depending on many factors, a major capital investment’s work units might be called modules, useful segments,
components, or other terms. Some of these terms might have special definitions in some circumstances.
21 The work breakdown structure might also subdivide the work units listed here into smaller components.
22 The baseline is also sometimes referred to as the performance measurement baseline (PMB).
23 Because a project can be re-baselined by changing the plan for cost, schedule, and functionality, it is always
important to be clear regarding which baseline is the subject of discussion and also the reasons why a project might have
been re-baselined.
24 The sum of all budgets established for the contract is sometimes referred to as Budget at Completion (BAC).

at a cost of $20. These numbers have a special name, planned value,25 which this report will also
generically call a metric. (Other metrics will be introduced as this section progresses.) In a sense,
when the agency that is paying for Project B and the agency’s contractor agree to plan that $25
will be spent to successfully complete work unit D, it could be said that the functionality that
corresponds to work unit D has taken on a monetary value of $25. Assigning work unit D’s
functionality a quantitative value (denominated in dollars according to the planned budget for that
functionality), in turn, will allow a quantitative representation of how much of work unit D may
have been completed at any given point in time, as explained below.
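To make the baseline concrete, a reader can think of Table 1 as a simple mapping from each work unit to its planned value. The short Python sketch below is purely illustrative; the variable names are hypothetical and are not drawn from the ANSI/EIA standard or from any agency system:

    # Hypothetical sketch: Project B's baseline B1 as a mapping from
    # each work unit to its planned value (in dollars).
    planned_value = {"A": 10, "B": 15, "C": 10, "D": 25, "E": 20, "F": 20}

    # The sum of all planned-value budgets is sometimes called the
    # Budget at Completion (BAC); here it is $100, as in Table 1.
    budget_at_completion = sum(planned_value.values())
    print(budget_at_completion)  # 100

The same mapping is reused in the sketches that follow to reproduce the report’s schedule and cost variances.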
Explanation of Earned Value and Schedule Variance

Table 2 adds two rows of EVM data, with two new metrics, to the previous table.
Table 2. Schedule Variance of Work Units, “Project B1”

                                    Work Units of Project B
Baseline ID   Metric                 A     B     C     D     E     F    Total
B1            Planned value ($)     10    15    10    25    20    20      100
B1            Earned value ($)      10    15    10    10    20     -       65
B1            Schedule variance      0     0     0   -15     0   -20      -35 = -35%
The first row of data in Table 2 shows the same planned value that was portrayed in Table 1 for
Project B and its constituent work units, according to the project’s planned baseline B1. Note that
the planned value for Project B’s various work units continues to sum to $100, the overall cost of
the contract to the agency. The second row is new. This row is intended to show how much
functionality has actually been successfully developed by the date this EVM report was produced.
Work unit A, for example, is shown with an “earned” value (denominated in dollars) of $10. This
means that the functionality that was planned for work unit A has been successfully completed or
earned, and becomes earned value,26 not merely value that was planned. As work is successfully
performed, therefore, it is earned on the same basis as it was planned. Stated differently, work
unit A has earned the full functionality that was planned, and the work unit and overall project
are, therefore, credited with $10 of value that has been earned for purposes of work unit A.
Looking across the table, one sees similar situations with work units B, C, and E. In each case,
the functionality that was planned for these work units has been successfully and fully developed,
or earned. Work units D and F, however, are another story. Table 2 shows that work unit D was
partially, but not fully, completed, and also that work unit F was never started. Only $10 worth of
functionality, out of work unit D’s planned value of $25, has been accomplished. In other words,
only 40% of the work associated with work unit D has been successfully completed, and 60% of
the work has not been successfully completed. As a result, work unit D is said to have a schedule
variance of -$15 ($10 earned value minus $25 planned value equals -$15 schedule variance).
With EVM, the term variance, therefore, implicitly means difference from the relevant plan (or
baseline). The variance is denominated in dollar terms, because dollars (i.e., dollars of planned
value) represent the functionality that is planned to be developed and, eventually, either delivered

25 The term planned value is sometimes referred to as Budgeted Cost for Work Scheduled (BCWS).
26 The term earned value is sometimes referred to as Budgeted Cost for Work Performed (BCWP).

or not delivered. With only $10 of earned value, work unit D is $15 behind schedule, compared
with the baseline B1 planned value of $25. Or, equivalently expressed in percentage terms, work
unit D is 60% behind schedule. Because work unit F was never started, work unit F has a
schedule variance of -$20, and is 100% behind schedule. When work units A through F are
summed, out of the planned value of $100 for Project B, only $65 of the planned functionality has
been successfully developed, owing to work units D and F falling behind schedule. Overall,
Project B is therefore said to have an earned value of $65, $35 lower than the project’s planned
value, and is said to have a schedule variance of -$35, or in percentage terms, a 35% schedule
variance (i.e., 35% behind schedule compared to what was planned to be accomplished). The
35% schedule variance might also be understood as a shortfall in the functionality that had been
planned to be delivered within the time period of the EVM report.
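The schedule variance arithmetic just described can be stated compactly in code. Continuing the earlier illustrative sketch (and again using hypothetical names), the following Python lines reproduce the figures in Table 2:

    # Hypothetical sketch, continuing the example above: earned value
    # actually accomplished for each work unit (in dollars); work unit
    # "F" was never started, so it has earned nothing.
    earned_value = {"A": 10, "B": 15, "C": 10, "D": 10, "E": 20, "F": 0}

    # Schedule variance (SV) = earned value - planned value, per work unit.
    schedule_variance = {u: earned_value[u] - planned_value[u]
                         for u in planned_value}
    total_sv = sum(schedule_variance.values())           # -35
    sv_percent = 100 * total_sv / budget_at_completion   # -35.0 percent
    print(schedule_variance["D"], schedule_variance["F"], total_sv)  # -15 -20 -35

As in the text, work unit D shows -$15, work unit F shows -$20, and the project overall is 35% behind what was planned.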
An explanation of why work units D and F were not successfully completed, however, is not
captured with EVM data. To construct such an explanation, other evaluation and monitoring
methods from Figure 1 might need to be explored. In addition, it would probably be possible to
use qualitative and quantitative information from individuals involved in the project, and
potentially also from stakeholders, to construct interpretations of what might have caused the slip
in schedule. As GAO has cautioned, “[i]t is important to understand that variances are neither
good nor bad. They are merely measures that indicate that work is not being performed according
to plan and that it must be assessed further to understand why.”27
Explanation of Actual Cost and Cost Variance

The schedule variance metric discussed above (i.e., the difference between planned value and
earned value) only captures part of the overall story of Project B: whether the project is on
schedule or behind schedule. One has yet to look at whether the project is on track with regard to
its actual cost, in comparison to the baseline B1. The question of whether Project B is on track
with its planned cost is the subject of Table 3, below.
Table 3 presents two additional EVM metrics that are necessary to assess whether a project is
experiencing cost overruns or coming in under budget compared to the baseline. For each work
unit, the table compares the value of the functionality that was successfully delivered (earned
value, which was discussed above) with the amount of money that was actually spent to achieve
that functionality (actual cost,28 the first of the two new metrics). By subtracting the actual cost
from the earned value, one calculates the second new metric: cost variance.

27 U.S. Government Accountability Office, Cost Assessment Guide: Best Practices for Estimating and Managing
Program Costs, Exposure Draft, GAO-07-1134SP, July 2007, p. 218. Contractor EVM reports often include
explanations of the reasons for variances.
28 The term actual cost is sometimes referred to as Actual Cost of Work Performed (ACWP).

Table 3. Cost Variance of Work Units, “Project B1”

                                    Work Units of Project B
Baseline ID   Metric                 A     B     C     D     E     F    Total
B1            Earned value ($)      10    15    10    10    20     -       65
B1            Actual cost ($)        9    22     8    30    22     -       91
B1            Cost variance          1    -7     2   -20    -2     0      -26 = -40%
Some illustrations provide intuition for understanding cost variance for individual work units and
also the overall project. In Table 3, the first row of EVM data shows that work unit A was
successfully delivered, creating earned value of $10.29 The second row of EVM data shows that
the actual cost of work unit A was only $9, or $1 under the earned value. Subtracting the $9 actual
cost from the $10 earned value results in a $1 cost variance. A positive number for cost variance,
therefore, means something was delivered under budget. Work unit C similarly came in under
budget, as shown by its positive cost variance.
Work units B, D, and E, by contrast, show that cost overruns have occurred.30 A negative cost
variance means that a work unit (or portion of a work unit) was delivered over budget: more
money was spent for the work successfully accomplished than was planned. To achieve the full
functionality that was delivered for work unit B (represented by the work unit’s earned value), the
actual cost was $22, considerably higher than the original budget (planned value). The cost
variance for work unit B is -$7, representing a cost overrun. In percentage terms, the cost overrun
can be calculated by dividing the cost variance (i.e., -$7, the excess amount spent over the
budgeted amount) by the earned value (i.e., $15, the value of the functionality that was delivered).
The cost overrun was, therefore, nearly -47%.
For work unit D, the picture looks even worse. In developing the functionality represented by $10
of earned value (only a portion of the hoped-for functionality represented by $25 in planned
value), the actual cost was $30. That is, only 40% of work unit D’s planned functionality was
delivered successfully, but at a cost that exceeded the budget for the entire work unit. Work unit
D’s cost variance is -$20 ($10 earned value minus $30 actual cost), or -200%. The intuition
behind the -200% figure is as follows. Only a portion of work unit D was successfully delivered
(i.e., $10 out of work unit D’s overall planned value of $25 was successfully delivered, becoming
$10 in earned value).31 Looking at work unit D, overall, the portion of the work unit’s
functionality that was delivered successfully ended up costing three times what was planned
($30 actual cost instead of $10), which is a 200% overrun for that portion of work unit D
that was successfully completed.32

29 Recall that in Tables 1 and 2, work unit A was planned to be delivered at a cost of $10, and therefore had a planned
value of $10. Full delivery of that functionality would, therefore, result in $10 of earned value.
30 However, simply because a cost overrun has occurred does not necessarily mean the work unit or project was
mismanaged. As discussed earlier in this report, many factors might cause problems (cost overruns, schedule slips, and
functionality shortfalls or failures) for a major capital investment.
31 Stated differently, work unit D’s overall planned value was $25, and the planned value of the work that was
successfully completed was $10.
32 A 200% cost overrun means that costs were triple what was planned, a 100% cost overrun means that costs were
twice what was planned, etc.

Finally, one might assess work unit F. Because work unit F was not started, no money was spent
on the work unit, resulting in a cost variance of zero.
Looking at Project B overall, in terms of its B1 baseline, the earned value for the major capital
investment is $65 (nearly two-thirds of the planned functionality). In achieving $65 in earned
value, however, $91 was spent. Stated differently, the portion of the overall project that was
performed successfully (i.e, the $65 earned value) was, by definition, originally planned to cost
$65 (i.e., the portion had a planned value of $65), but in the end, was completed at an actual,
overall cost of $91.33 These metrics result in an overall cost variance for the project of -$26
(earned value of $65 minus an actual cost of $91), or exactly -40% (-$26 cost variance divided by
the earned value of $65). Project B, therefore, could be said to have experienced a 40% cost
overrun on the capability that has been successfully delivered. The capability that was
successfully delivered in the time period of the EVM report, in turn, was substantially less than
the capability that was planned to be delivered in the time period (-35%, the schedule variance,
which shows a schedule slip and corresponding functionality shortfall).
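The cost variance arithmetic can be sketched the same way. Continuing the illustrative Python example (hypothetical names again), the following lines reproduce the figures in Table 3:

    # Hypothetical sketch, continuing the example above: actual cost
    # incurred for each work unit (in dollars).
    actual_cost = {"A": 9, "B": 22, "C": 8, "D": 30, "E": 22, "F": 0}

    # Cost variance (CV) = earned value - actual cost; a negative
    # number indicates a cost overrun on the work actually performed.
    cost_variance = {u: earned_value[u] - actual_cost[u]
                     for u in earned_value}
    total_cv = sum(cost_variance.values())   # -26
    total_ev = sum(earned_value.values())    # 65
    cv_percent = 100 * total_cv / total_ev   # -40.0 percent
    print(cost_variance["D"], total_cv)      # -20 -26

Note that the percentage is computed against earned value, not planned value, because the overrun is measured on the work that was actually delivered.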
EVM as More Useful Tool than Simple Spend Comparisons

EVM tutorials often compare EVM metrics with an oversight tool that is often considered not
useful: the “spend comparison.”34 The typical spend comparison approach, whereby contractors
report actual expenditures against planned expenditures, is unrelated to the functionality that was
successfully delivered (see Table 4).
Table 4. Spend Comparison of Work Units, “Project B1”

                                    Work Units of Project B
Baseline ID   Metric                 A     B     C     D     E     F    Total
B1            Planned spend ($)     10    15    10    25    20    20      100
B1            Actual spend ($)       9    22     8    30    22     -       91
B1            Spend variance         1    -7     2    -5    -2    20        9 = 9%
Table 4 shows a simple comparison of planned spending and actual spending, which is unrelated
to work performed and, therefore, potentially misleading and not a very useful comparison. The
fact that the overall amount spent was $9 less than planned for this period is not useful for most
purposes, because the metrics lack comparisons of spending with the functionality that was
successfully delivered.
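The contrast with EVM can be seen by computing the spend comparison from the same numbers. In the continuing illustrative sketch, the spend variance looks favorable even though the project is badly off track:

    # Hypothetical sketch: a simple spend comparison, which ignores
    # the work actually performed.
    spend_variance = {u: planned_value[u] - actual_cost[u]
                      for u in planned_value}
    print(sum(spend_variance.values()))  # 9: appears "under budget"

    # The EVM metrics computed earlier tell the real story:
    # total_sv == -35 (35% behind schedule) and
    # total_cv == -26 (a 40% cost overrun on delivered work).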
In Sum: EVM Metrics as Oversight Tools

Considered together, Tables 1, 2, and 3 cover central aspects of EVM: comparing the planned
cost, time schedule, and functionality for a major capital investment against the actual cost, time
schedule, and functionality of what was successfully delivered. Many observers emphasize that
EVM might help facilitate the correction of problems during the investment process. For

33 A portion of a work unit has a planned value, therefore, that is only a portion of the work unit’s overall planned
value.
34 See http://www.acq.osd.mil/pm/faqs/evbasics.htm.

example, as one prominent federal agency website asserts, from a practitioner perspective, “[t]he
benefits to project management of the earned value approach come from the disciplined planning
conducted and the availability of metrics which show real variances from plan in order to
generate necessary corrective actions.”35
Another potential benefit of EVM, from an oversight perspective, is to provide a picture of the
status of a major capital investment at a “snapshot” in time. EVM calculations might enable
project managers, contractors, program managers, overseers, and outside stakeholders to see
quickly and simply whether or not a project is “on track” with its baseline plan. A negative
schedule variance or cost variance indicates a schedule slip or cost overrun, whereas positive
variances indicate a project is ahead of schedule or under budget. EVM metrics might, thereby,
suggest the need for additional analysis in targeted ways to explain the root causes of problems36
and what could be done, if anything, to get a project back on track, or even provide evidence of
more fundamental management, resource, and oversight problems. EVM metrics do not,
however, necessarily tell the whole story, as discussed below.
Caveat: EVM Metrics Might Tell Only Part of the Story

The story that EVM tells can be a significant one. Once the basic jargon and concepts of EVM
are understood, EVM provides a visually simple picture, on a periodic basis, of whether or not a
project is on track with plans. For oversight purposes, EVM can thereby “flag” outliers and signal
the potential need to ask further questions. The prospect of this scrutiny might, in turn, provide
incentives to OMB and agencies to, among other things, (1) closely monitor the progress of major
capital investments; and, more fundamentally, (2) build the capacity (e.g., resources, staff, skills,
management attention and discipline, and processes) for OMB and agencies to properly manage
investment-related activities and evaluations.
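One simple way to operationalize such flagging is to test whether variances fall outside a tolerance band, in the spirit of the OMB memorandum’s expectation, noted earlier, that agencies manage projects to within 10% of baseline goals. The sketch below continues the illustrative Python example; the 10% threshold is used here only for illustration, not as a prescribed rule:

    # Hypothetical sketch: flag variances that fall outside a tolerance band.
    TOLERANCE_PERCENT = 10.0

    def outside_tolerance(variance_dollars, base_dollars):
        """Return True if the variance exceeds the tolerance band."""
        percent = 100 * variance_dollars / base_dollars
        return abs(percent) > TOLERANCE_PERCENT

    print(outside_tolerance(total_sv, budget_at_completion))  # True: -35% schedule slip
    print(outside_tolerance(total_cv, total_ev))              # True: -40% cost overrun

Project B would be flagged on both dimensions; as the report notes, such flags signal the need for further inquiry, not conclusions.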
Nonetheless, without further evaluation of a project, EVM metrics might tell only a partial story.
For example, even when EVM metrics are accurately captured and portrayed, the metrics
typically will not reveal why a project might be experiencing schedule or cost variances. A
process or implementation evaluation might be necessary to address such questions (e.g., to verify
and complement explanations that are provided by contractors). In addition, other evaluation
methods (see Figure 1) are typically used to assess other significant and, sometimes, more
fundamental questions, including (1) whether a project is worth undertaking (or continuing); and
(2) after the investment process is completed, what impact an investment might be having on
achievement of an agency’s mission or the success of a public policy, compared to what would
have happened without the investment.
In addition, a project’s baseline plan and corresponding EVM data might not accurately represent
the cost and schedule that are most likely necessary for a project to achieve a particular
functionality. Cost, schedule, and functionality plans are, of course, made in the context of
complexity and uncertainty, which oftentimes can all but guarantee that a capital investment project
will not proceed exactly according to plan. Alternatively, estimates of a project’s costs, schedule,
functionality, and benefits might be based on insufficient research and analysis.37 Furthermore,

35 See http://www.acq.osd.mil/pm/faqs/evbasics.htm.
36 A large number of other EVM metrics are used to manage projects and contracts and are not discussed in this report.
See the third question and answer at http://www.acq.osd.mil/pm/faqs/faq.htm for more information.
37 Independent validation of cost, schedule, functionality, and benefit estimates might bolster the confidence of some
estimates, albeit with additional cost to perform any such evaluations.





the use of metrics can create powerful incentives, including perverse incentives.38 One project
management reference presents a humorous and skeptical, albeit seriously intended, scenario to
caution against too quickly accepting EVM metrics at face value as accurate representations of
the full picture. The scenario is worth excerpting at length—not as a portrayal of what is
typical—but rather as an illustration of an overseer interpreting EVM metrics; thinking about
incentives; and confronting the issue of what a “reasonable” plan should be for a project’s cost,
schedule, and functionality. Specifically, the author posits a scenario showing a
project is [quite substantially] ahead of schedule and underspent.... At first glance, this looks
wonderful. But ask yourself how the variance happened. There are three possible
explanations for how this project manager achieved the results shown:
1. Actual labor [costs] were considerably lower than expected and/or the people were more
efficient than anticipated.
2. The project team had a “lucky break.” The team had expected to have to work really hard
to solve a problem, but it turned out to be very easy.
3. The project manager “sandbagged” his estimates. He padded everything, playing it safe.
If you believe situation 1, you will believe anything. It is unlikely that both variances would
happen at the same time. Situation 2 happens occasionally. When all the planets are
aligned—about once in a zillion years, you say. You bet!
Situation 3 is the most likely explanation. The project manager was playing it safe. And he
would tell you that there is no problem. After all, if he continues along this course, the
project will come in ahead of schedule and underspent, which means a manager will give
money back to the company. No problem.
But is that true? First of all, I can almost guarantee that the project manager won’t give any
money back. He will find a way to spend it—by adding bells and whistles to the project, by
buying unplanned equipment and supplies, or by throwing one huge party! No sane project
manager wants to give the money back, because he knows that his next project will be cut.
However, suppose the manager did give back the money. Would that be okay? No. The
reason is that the organization would have lost the opportunity to use the money to fund
some other project.... But the money is available. It is tied up in the under-budget project. If
that project were rescheduled and rebudgeted, the money would be available to keep the
other project going, assuming that the [return on investment] is still justified.
The question is, naturally, What is reasonable? We certainly cannot expect to have zero
variances in a project. And this is true. It all depends on the nature of our [commercial
sector] business. Well-defined construction projects can be held to very small tolerances—as
small as plus-or-minus 3 to 5 percent. Research and development projects are likely to run
higher tolerances, perhaps in the range of 15 to 25 percent. Each organization has to develop
acceptable tolerances from experience.39

38 For more on perverse incentives, see Steve Kerr, “On the Folly of Rewarding A, While Hoping for B,” Academy of
Management Journal, vol. 18, no. 4, 1975, pp. 769-783.
39 James P. Lewis, The Project Manager’s Desk Reference, 2nd ed. (Boston: McGraw-Hill, 2000), pp. 208-211.

Whether the author’s scenario and rules of thumb are fully applicable to federal government
projects generally, much less in specific policy and technology domains, is debatable.
Nevertheless, his scenario illustrates some of the psychological and implementation phenomena
that might be at play in the context of major capital investments. In a federal government context,
phenomena like these might be at play with any participating actor (e.g., advocate, critic, project
manager, funder, contractor, overseer) at any point in the process of researching, proposing,
funding, executing, operating, and evaluating major capital investments.
For example, who says the budget and planned functionality corresponding to a proposed project
are accurate? What analyses are the judgments based upon? Are cost and benefit estimates
reasonable? Who might have the capacity and independence to make such an assessment? Do
agencies, inspectors general, or OMB have the staff resources to perform adequately the
necessary analytical and monitoring tasks for major capital investments? Are the President,
political appointees, and career civil service executives making decisions based on, or in spite of,
reasonable and objective analysis and monitoring? What are the real reasons a project is under or
over budget, or ahead of or behind schedule? How much scrutiny is too much? What perverse
incentives and consequences might be created when project managers make good faith attempts
to estimate costs, schedules, and functionality, but variances inevitably occur, perhaps due to
uncertainty or bad luck, bringing high levels of distracting scrutiny to a project and its sponsoring
agency or agencies?
Questions such as these are difficult to answer, but suggest that answers and corresponding
lessons might be learned for specific policy areas and types of investments over time and with
experience. This observation raises two further questions. For each type of actor involved in
planning, funding, implementing, evaluating, and overseeing major capital investments (e.g.,
project managers, program managers, political appointees, career executives, congressional staff,
Members of Congress, and the President), what are the necessary prerequisites to learning these
lessons from experience? That is, what capacities and processes need to be in place in order to
learn from experience? How many of these prerequisite capacities and processes are currently in
place?

Potential Oversight Questions for Congress

Major capital investments give rise to many actual or potential issues for Congress. Issues that are
potentially related to EVM are a significant subset of this larger picture. With the increasing
emergence of EVM as a tool for gauging the status and progress of major capital investments,
potential congressional oversight issues appear likely to continue emerging, as well. This report
culminates with a sampling of potential oversight questions for Congress that draw on the report
and other resources, as noted.
• In light of what appears to be more widespread adoption of EVM practices for
major capital investments by executive agencies and their contractors, what
might be the advantages, disadvantages, costs, and benefits of legislatively
requiring more formal, periodic, and publicly accessible agency reporting of
EVM metrics for some, most, or all major capital investments?
• To what extent might recent congressional efforts to increase budget and
financial transparency—for example, the establishment of a searchable website
for federal contracts and grants required by the Federal Funding Accountability
and Transparency Act of 2006 (FFATA; P.L. 109-282; 120 Stat. 1186)—be
viewed as an illustration of what could be done with EVM information?40 What
would be the advantages and disadvantages?
• The usefulness of EVM metrics depends upon the quality and reliability of the
cost, schedule, and functionality data that underlie both a project’s baseline plan
and its reports on work performed.41 Given concerns that have been raised about
the quality of agency capital investment business cases that are used to justify
budget requests,42 as well as concerns that have been raised about the quality of
agency project management,43 what might be options for Congress to cause OMB
and agencies to improve any deficiencies in the underlying analyses and data that
support capital investment decision making and reporting?
• What might be the advantages and disadvantages of requiring independent,
publicly accessible assessments—including verification and validation, as
appropriate—of the underlying quality of EVM data, including cost estimates
and assessments of the functionality that is delivered in comparison with what
was planned?
• What might be the advantages and disadvantages of requiring independent,
publicly accessible assessments of the capacity of agencies to manage the capital
planning and investment control (CPIC) process (e.g., agency capacity to
research, formulate, plan, propose, develop, report on, operate, and evaluate
major capital investments), which is arguably more fundamental as a driver of
EVM data quality? Would assessments of key capabilities be amenable to
systematic oversight through the use of “scorecards,” similar to those used under
the Bush Administration’s President’s Management Agenda, albeit with
transparency outside the executive branch and independent validation? To the
extent that OMB currently identifies gaps in agency capabilities, to what extent
do annual budget requests (submitted to Congress by the President) and
performance plans (produced by agencies under the Government Performance
and Results Act of 1993, or GPRA; P.L. 103-62) address the gaps?
• Processes that support the capital investment process—including development of
capital investment business cases, project management, EVM, contract oversight,
management auditing, and evaluation—depend for success, in large part, upon
adequate staff and skill resources in agencies.44 Have agencies allocated

40 For background on that law, see CRS Report RL33680, The Federal Funding Accountability and Transparency Act:
Background, Overview, and Implementation Issues, by Garrett Hatch.
41 For discussion intended for the federal audit community, see U.S. Government Accountability Office, Cost
Assessment Guide: Best Practices for Estimating and Managing Program Costs, Exposure Draft, GAO-07-1134SP,
generally and also chap. 18.
42 See U.S. President’s Council on Integrity and Efficiency, U.S. Executive Council on Integrity and Efficiency, Fiscal
Years 2006 and 2007 Assessments of Federal Agencies’ Exhibit 300s: A Compilation Prepared for the Office of
Management and Budget, Mar. 2007, available at http://www.ignet.gov/randp/rpts1.html; and U.S. Government
Accountability Office, Information Technology: Agencies Need to Improve the Accuracy and Reliability of Investment
Information, GAO-06-250, Jan. 2006.
43 For example, see U.S. Department of Education, Office of Inspector General, Effectiveness of the Department’s
Financial Management Support System Oracle 11i Re-Implementation, ED-OIG/A11F0005, June 26, 2007, available at
http://www.ed.gov/about/offices/list/oig/auditreports/a11f0005.doc.
44 U.S. Chief Information Officers Council, Information Technology (IT) Workforce Capability Assessment Survey
(2004), Analysis of Survey Results, Dec. 2004, available at http://www.cio.gov/
index.cfm?function=specdoc&id=545&category=11.

sufficient staff and financial resources to successfully undertake major capital
investments? How might such judgments be made? Also, do agencies have
sufficient ability to recruit, retain, and train personnel in disciplines including
project management, risk management, contract management and oversight,
information technology, and program evaluation? If so, what are their track
records in recruiting, retaining, training, and allocating staff in sufficient number
and competency in the management and oversight of major capital investments?
If not, what could be done?
• What might be the advantages, disadvantages, costs, and benefits of applying
EVM-related requirements to major capital investments, as appropriate, pursued
by agencies in the legislative and judicial branches?
Author Contact Information

Clinton T. Brass
Analyst in Government Organization and Management
cbrass@crs.loc.gov, 7-4536