Assessing and improving partnership relationships and outcomes:
a proposed framework
Jennifer M. Brinkerhoff*
Department of Public Administration, The George Washington University, Washington, DC 20052, USA
Abstract
To date, no evaluation frameworks are specifically targeted at evaluating partnership relationships, as opposed to partnership programmatic
outcomes. Following a discussion and definition of partnership, its defining features, and value-added, the paper proposes a framework for
assessing partnership relationships in order to: (1) improve partnership practice in progress, (2) refine and test hypotheses regarding
partnership’s contribution to performance and outcomes, and (3) suggest lessons for future partnership work. The proposed assessment
approach is continuous, process-oriented and participatory, and developmental. Targets of assessment include compliance with prerequisites
and success factors, degree of partnership practice, the outcomes of the partnership relationships, partners’ performance, and efficiency.
Indicators and associated methods are proposed for each of these. The framework addresses the evaluation challenges of integrating process
and institutional arrangements into performance measurement systems, thus contributing to relationship performance as well as program
outcomes. It can also be used to enhance the theory and practice of partnership. © 2002 Elsevier Science Ltd. All rights reserved.
Keywords: Partnership; Framework; Organization
1. Introduction
Throughout the public, private, and non-profit sectors,
there is increasing experimentation with the use of partner-
ships, alliances, and networks to design and deliver goods
and services. Partnership, in particular, is touted as the
answer to many public service challenges.1
However, it
remains unclear whether partnership actually enhances
performance and, if so, how. The increase in the
rhetoric and practice of partnership is based on the
assumption that partnership not only enhances outcomes,
whether qualitatively or quantitatively, but also
results in synergistic rewards, where the outcomes of the
partnership as a whole are greater than the sum of what
individual partners contribute. Some research supports
partnership’s contribution to improved performance.2
However, most evidence of inter-organizational partner-
ships’ contributions to performance is anecdotal, except in
some private sector alliances, where increased efficiencies
can be quantified (Shah & Singh, 2001). In short, synergistic
results are often sought and referenced, but they are rarely
fully articulated and measured (Dobbs, 1999). Furthermore,
the process, or ‘how to’, of creating such synergistic rewards
is more hopeful than methodical or well understood.
Under the new public management, evaluation most
often concentrates on results or outcomes. While these
are important in ensuring responsiveness, accountability,
and quality, they do not tell us much in terms of how to
improve public service delivery and enhance efficiency,
especially when results are disappointing. Recent inno-
vations in the private sector underscore the shortcomings of
over-emphasizing or looking exclusively at outcomes, e.g.
financial performance, and ignoring process dimensions.3
One danger is sacrificing long-term value creation for short-
term performance (Kaplan & Norton, 2001). From a
pragmatic perspective, focusing only on results is simply
not an effective management approach. While outcomes
may be ‘valid as infrequent indicators of the health of entire
systems’, they are not useful for ‘making tactical decisions’
or interpreting performance within shorter time frames
(Schonberger, 1996, p. 17).
* Tel.: +1-202-994-3598; fax: +1-202-994-6792.
E-mail address: jbrink@gwu.edu (J.M. Brinkerhoff).
1. Partnership is distinguished from other relationship types according to two defining dimensions, mutuality and organization identity, discussed later.
2. For example, Ellinger, Keller, and Ellinger (2000) studied the relationship among departments internal to organizations. They examined the relationships among interaction (meetings and information exchange), collaboration (teamwork, sharing, and the achievement of collective goals), and performance. They found that while both interaction and collaboration are positively associated with performance, collaboration mediates the relationship between interaction and performance.
3. For a brief review of these, see Kaplan and Norton (1992, 1996) and Schonberger (1996).
Good evaluation practice suggests that, ideally, evalu-
ation takes into account all key factors that may influence
outcomes. This would encompass the institutions and
incentives governing the implementation of policies and
programs, including informal rules, regulations, controls,
and structures (Squire, 1995). These dimensions are crucial
components of cause-and-effect linkages that comprise
strategy and ultimately lead to performance outcomes
(Kaplan & Norton, 2001). Unfortunately, addressing
process and institutional arrangements is among the most
common difficulties associated with performance indicators
and performance monitoring (Funnell, 2000).
The fad of performance management frequently ignores
these elements. There are several reasons for this; among
them are two evaluation challenges, and one reality. First,
processes and institutional arrangements are not only
difficult to measure, they are sometimes difficult to identify
and articulate. Varying degrees of formalization, organiz-
ation culture, and the broader social culture, including
personal relationships, may yield exceptions to planned
procedure. Understanding the gap between implementation
plans and actual operations can be difficult. In addition,
indicators of such process and institutional features are not
easily quantified. These systems are also dynamic, necessi-
tating continuous review with periodic adjustments in
targets of analysis and program theory assumptions. The
second challenge is one of attribution. How can we know
that this particular process or institutional arrangement
causes this particular outcome? Or is even associated with
it? Outcomes are separated causally and temporally from
such inputs; attribution requires a sophisticated investi-
gation of cause-and-effect relationships that may entail
multiple intermediate stages. Even after such analysis,
attribution may be problematic.
One reality of why processes and institutional arrange-
ments are relatively ignored in the current emphasis on
performance measurement is simply that they are not
immediately exciting. They are not immediate, in that it
takes time for these arrangements to become institutional-
ized and some element of them will always remain dynamic.
They are not exciting because in the eyes of direct program
beneficiaries, they are less important than the outcomes
themselves. While the general public may persist in this
prioritization, it behoves public managers to become more
technical and scientific about the way they assess and
improve public programs as a means to enhancing these
outcomes that are so valued. Do we expect anything less
from the private, commercial sector? Consumers may
continue to vote with their dollars, but investors want to
know that companies are internally efficient and effective,
enabling them to sustainably produce valued goods and
services, respond to changes in the marketplace, and pursue
innovation. Similarly, public managers and policymakers
must be accountable not only to the recipients of public
goods and services, but also to taxpayers, who want to know
that those goods and services have come at an efficient price.
The purpose of this paper is to propose a framework for
assessing partnership work in progress, with an eye to
improving partnership practice as a means to enhancing
outcomes. Such a task will necessarily entail different layers
of assessment, with slightly varying purposes. First, a
developmental evaluation approach, i.e. one that seeks to
improve work in progress, will be used to dialectically
determine indicators, collect data, and assess partnership
practice. This approach aims to ensure good partnership
practice, consistent with our general knowledge of what
partnership means, in order, second, to support a theory-
based evaluation, which seeks to test the theory that
partnership contributes to performance. Together, the two
approaches will help to maximize the effectiveness of the
partnership in progress, and in the event the program is not
successful, help preclude assumptions that ineffectiveness
of the overall program is attributable to theory failure, as
opposed to process failure (Birckmayer & Weiss, 2000). In
this sense, we need to examine partnership both as a means
and an end in itself. The proposed assessment approach and
its application seek to: (1) improve partnership practice in
the context of program implementation; (2) refine and test
hypotheses regarding partnership’s contributions to per-
formance and outcomes; and (3) suggest lessons for future
partnership work in order to maximize its potential to
enhance outcomes.
The paper begins with a brief description of the nature
and definition of partnership, followed by a review of
existing conceptual frameworks that may be useful in
assessing partnership work. The proposed framework is then
presented, including the general approach, a discussion of
what to measure, and a proposed methodology.
2. The nature of partnerships4
Partnership is promoted both as a solution to reaching
efficiency and effectiveness objectives, and as the most
appropriate relationship as defined by value-laden prin-
ciples. Based on a review of the literature (Brinkerhoff,
2002b), the ideal type of partnership can be defined as
follows:
Partnership is a dynamic relationship among diverse
actors, based on mutually agreed objectives, pursued
through a shared understanding of the most rational
division of labor based on the respective comparative
advantages of each partner. Partnership encompasses
mutual influence, with a careful balance between synergy
and respective autonomy, which incorporates mutual
respect, equal participation in decision-making, mutual
accountability, and transparency.
4. This section draws heavily from Brinkerhoff (2002a).

There are three obvious problems with these ideal-type
definitions: (1) the extent to which they can be operational-
ized is unclear; (2) they may not be universally appropriate;
and (3) their justification is subjective and values-based. It
is, therefore, more appropriate to examine partnership
practice on a relative scale, according to more specific
definitional dimensions. This allows us to examine empiri-
cally the extent to which an inter-organizational relationship
is operating like a partnership. Henceforth, the term
partnership is used to describe this relative practice.
Literature and experience combine to suggest that two
dimensions are salient for defining partnership and dis-
tinguishing it from other relationship types. Mutuality
encompasses the spirit of partnership principles; and
organization identity captures the rationale for selecting
particular partners, and its maintenance is the basis of
partnership’s value-added. Mutuality can be distinguished
as horizontal, as opposed to hierarchical, coordination and
accountability, and equality in decision-making, as opposed
to domination of one or more partners. Additional principles
(from the ideal-type partnership) include jointly agreed
purpose and values; and mutual trust and respect. Mutuality
does not imply equal power relations. However, it does seek
to highlight the indispensability of each partner (based on
organization identity below), which can assist traditionally
weaker partners to advocate for greater equality in decision-
making. Mutuality refers to mutual dependence, and entails
respective rights and responsibilities of each actor to the
others (Kellner & Thackray, 1999). These rights and
responsibilities seek to maximize benefits for each party,
subject to limits posed by the expediency of meeting joint
objectives.
Organization identity generally refers to that which is
distinctive and enduring in a particular organization. It is
generally believed that the creation and maintenance of
organization identity is essential to long-term success
(Albert & Whetten, 1985; Gioia, Schultz, & Korely,
2000). The key is not necessarily to maintain organization
systems, processes, and strategies over time, but to maintain
the organization’s core values and constituencies. Organiz-
ation identity can be examined at two levels. First, the
maintenance of organization identity is the extent to which
an organization remains consistent, committed, accounta-
ble, and responsive to its mission, core values, and
constituencies. Second, from a broader institutional view,
organization identity also refers to the maintenance of
characteristics—particularly comparative advantages—
reflective of the sector or organizational type from which
the organization originates. A primary driver for partnership
is accessing key resources needed to reach objectives, but
lacking or insufficient within one actor’s individual
reserves. While each actor has their own unique portfolio
of assets and skills, generalizations can be made with
respect to the comparative advantages of particular types of
actors.
Both internal and external perceptions of organization
identity are important. Internally, a strong sense of
organization identity is an essential component of
organization effectiveness, particularly with respect to
staff commitment and motivation. Externally, two
sources of perceptions are salient. First, organizational
success is dependent upon the perceptions of the
organization’s constituents. Performance definitions
increasingly stress assessment from the constituents an
organization seeks to benefit (Fowler, 1997). Second,
the basis for partnership’s value-added is accessing what
external partners perceive to be unique contributions
(see later).
These defining dimensions help to distinguish part-
nership from other relationship types. Other relation-
ships may emphasize only one dimension. For example,
contracts typically seek to exploit organization iden-
tity—purchasing the unique advantages of a particular
organization, but incorporate little mutuality, with the
terms of the contract determined in advance by the
purchasing organization.5
Another common relationship
variation is extension, where mutuality may be high, but
over time a significant blurring between the organiz-
ations develops, where one or more can be said to have
lost their organization identity. Such relationships have
been documented with respect to non-profits partnering
with government (Lipsky & Smith, 1989–1990), donors
(Hulme & Edwards 1997), and the private sector
(Murphy & Bendell 1997). Finally, relationships charac-
terized by low mutuality and low organization identity
can be seen as co-optation or gradual absorption. These
relationships may begin as partnerships but lose these
dimensions over time. The exercise of power is inherent
to inter-organizational relationships. Lister (2000)
argues that power can be exercised to shape the needs
of others, influencing them to pursue behavior in the
interests of the power-holder. Such dynamics complicate
the identification of partnership practice, confirming the
need for broad and diverse participation in assessment
processes.
Partnership’s defining dimensions form the basis for
its value-added. Organization identity is the foundation
for partnership. Partnerships with other actors are
pursued precisely because these actors have something
unique to offer, whether this is resources, skills,
relationships, or consent. If organization identity is
lost, by definition comparative advantages are lost, the
organization loses legitimacy in the eyes of its defined
constituencies, and its effectiveness wanes. Absorption,
co-optation, bureaucratic creep, or, more broadly, the
infiltration of one organizational culture into another
can all lead to a diminished capacity of a partner to
maximize its contribution in the longer run (Edwards,
1996). There is no longer a strong rationale to justify
the extra effort required for partnership.6

5. Contracts do not always violate the mutuality dimension. As a legal mechanism, a contract can be used to confirm mutually determined agreements in support of a partnership.
Mutuality can reinforce as well as maximize the benefits of
organization identity. The opportunity to participate and
influence equally means that each actor can more easily
protect its organization identity, and hence the efficiency,
effectiveness, and synergistic rewards of the partnership. No
one organization can understand the implications of its or the
partnership’s actions for members’ organization identity.
Mutuality at least affords partner organizations the opportunity
to consider and explain these implications and potentially
defend their distinctive advantages, skills, and legitimacy—all
of which are necessary for the partnership’s success. Mutual-
ity also affords opportunities for partner organizations to
contribute their skills and other advantages as needed. With
mutuality, partners can more easily raise new ideas and
propose new, more effective approaches. Mutuality enables
partners to contribute to the partnership with fewer constraints
(e.g. approvals, scrutiny, regulation and other forms of
interference) and greater legitimacy. In addition, mutuality
can help to ensure acceptance of the partnership’s policy and
procedures, and ease their implementation, when each actor
has agreed to them and feels a sense of ownership.
Partnerships, like any relationship, are dynamic. While
accessing the unique contributions of other actors (i.e. their
organization identity) is the primary driver for pursuing
partnership, over time other motivators and de-motivators
may develop. Especially when partnerships are initiated
among partners without any previous history (or where the
history may have been conflictive or competitive), the
partnership may begin with highly specified roles and
responsibilities, with an effort to minimize mutual depen-
dence. Starting small, the relationship may then evolve into
more complex interactions and inter-dependencies as partners
develop mutual understanding and trust. In this sense, the
initial drivers for partnership may expand to encompass
emerging as well as newly recognized opportunities, which
actors may now perceive as lower risk. Alternatively, the
initial drivers may dissipate if partnership dimensions
(organization identity and mutuality) are not maintained, or
if contextual factors render them less relevant.
3. Existing conceptual frameworks for assessing
partnership work
While the evaluation and performance management
literature is replete with discussions of measuring outcomes
and results, there is very little written about evaluating or
assessing partnership relationships themselves. For
example, Provan and Milward (2001) propose a framework
for evaluating public sector networks at three levels: the
community, the network, and the organization/participant.
At the network level, they mainly suggest structural targets
of analysis (e.g. number of partners, and multiplexity, or
number of connections between organizations), or the
outcomes of the network (e.g. the range of services
provided). Their framework does little to address the quality
of the relationship among organization members and how it
can be improved to contribute more effectively to outcomes.
Particular fields are struggling with evaluating partner-
ship relationships, with some lessons offered. In the health
field, identified assessment criteria include:
willingness to share ideas and resolve conflict, improved
access to resources, shared responsibility for decisions
and implementation, achievement of mutual and indi-
vidual goals, shared accountability of outcomes, satis-
faction with relationships between organizations, and
cost effectiveness (Leonard, 1998, p. 5).
In the education field, the Ford Foundation Urban
Partnership Program provides a process example for
assessing partnership relationships (Rendon, Gans, &
Calleroz, 1998). While the initial evaluation framework
was more directly performance based, an assessment
component was later added to examine the history and
development of the partnerships, and lessons for partnership
design and implementation, among other things. This
assessment pursued a process approach, where partner
stakeholders determined and mutually agreed on their own
indicators for partnership work.
From a more general sectoral perspective, the nature of
private goods and their bottom line market prices and
quantifiable cost structures enable the private sector to rely
on straightforward quantitative data sources. For example,
Shah and Singh (2001) outline a model for evaluating the
performance of supply chains that relies on the time length
of various stages in the supply chain, the cumulative cost
addition for the raw material, and proportionate cost
addition for the various stages, culminating in a cost profile.
While the division of labor in partnerships for public service
delivery is rarely so straightforward and sequential, such
frameworks do suggest identifying quantifiable indicators,
where appropriate, to ensure that the added-costs of the
relationship do not outweigh its value-added. CIVICUS (the
World Alliance for Citizen Participation) looks specifically
at civil society and social capital, examining four facets:
structure, values, space, and impact (CIVICUS, 2001). This
framework is a reminder that partnership approaches are a
valued end in themselves. In order to assess their contribution
and efficiency, it is necessary to look not only at their structure
and impact, but at their operating values as well.
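The logic of such a cost profile can be illustrated with a small, hypothetical calculation; the stage names and figures below are invented for illustration rather than drawn from Shah and Singh's model, but they show how cumulative and proportionate cost additions across stages could serve as quantifiable indicators of the relationship's added costs.

```python
# Hypothetical sketch of a supply-chain-style cost profile: for each stage we
# track elapsed time and cost added, then compute cumulative and proportionate
# cost additions. Stage names and numbers are invented for illustration.
stages = [
    ("procure raw material", 5, 100.0),   # (stage, days, cost added)
    ("process/assemble",     3,  40.0),
    ("distribute",           2,  25.0),
    ("deliver to client",    1,  10.0),
]

total_cost = sum(cost for _, _, cost in stages)
cumulative = 0.0
print(f"{'stage':<22}{'days':>6}{'cum. cost':>11}{'share':>8}")
for name, days, cost in stages:
    cumulative += cost
    share = cost / total_cost          # proportionate cost addition
    print(f"{name:<22}{days:>6}{cumulative:>11.2f}{share:>8.1%}")
```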
6. Organizations may influence their partners’ organization culture, whether consciously or not, and this influence may be mutual and even desirable. In developing a partnership identity, for example, partner organizations cultivate a shared understanding of the partnership and its vision, in addition to a partnership organization culture. However, to maintain partnership’s value-added, partners must take care to maintain their unique identity and contribution over time, implying that their organization cultures will remain somewhat distinct, particularly in activities beyond the scope of the partnership.
A specific framework developed for assessing partner-
ships (particularly in international development) comes
from the US Agency for International Development’s
(USAID) work on inter-sectoral partnerships and the New
Partnership Initiative (Charles & McNulty, 1999). Drawing
upon the work of other assessment and evaluation tools,7
the
proposed framework identifies three dimensions for assess-
ment: values and capacity, process, and impact. While some
dimensions of the framework (organizational capacity and
culture, and communication processes) are immediately
relevant to the task at hand, the overall framework
emphasizes impact and the external environment to an
extent that is beyond the scope of the partnership
relationship assessment proposed here. As with the Ford
example noted earlier, Charles and McNulty (1999)
recommend that member partners participate in determining
and selecting precise indicators for each of the assessment
categories. They further confirm that partnership indicators
are more likely to be qualitative and subjective than
quantitative and objective.
The discussion-oriented self-assessment (DOSA) Tool
(Levinger & Bloom, 1997) demonstrates that industry
benchmarks can be established despite the incorporation
of self-determined, contextual indicators. DOSA combines
a framework of identified capacity targets with self-assessed
baseline data, and goal setting. DOSA’s application to a
number of US private voluntary organizations has yielded
industry benchmark data over several years.8
While there
are methodological limitations to this approach, it has
arguably encouraged organizations to participate and
engage in comparative performance analysis, potentially
improving the performance of the industry more broadly.
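As a rough sketch of the benchmarking idea (not DOSA's actual instrument or scoring rules), anonymous self-assessment scores from several organizations could be pooled into industry benchmarks, here by taking the median score per hypothetical capacity area:

```python
# Minimal sketch of pooling anonymous self-assessment scores into industry
# benchmarks. The capacity areas, the 1-5 scale, and the use of the median are
# illustrative assumptions, not DOSA's actual scoring rules.
from statistics import median

self_assessments = {                      # org id -> scores by capacity area
    "org_A": {"governance": 3, "financial management": 4, "partnering": 2},
    "org_B": {"governance": 4, "financial management": 3, "partnering": 4},
    "org_C": {"governance": 2, "financial management": 5, "partnering": 3},
}

areas = {area for scores in self_assessments.values() for area in scores}
benchmarks = {
    area: median(scores[area] for scores in self_assessments.values())
    for area in sorted(areas)
}

# Each organization can compare its own baseline against the benchmark
# without any participant's identity being disclosed.
for area, bench in benchmarks.items():
    print(f"{area}: benchmark = {bench}")
```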
Each of these examples offers lessons in terms of
assessment targets and processes. However, none of them is
specifically designed and articulated to emphasize the
assessment of the partnership relationship as a means to
performance outcomes, with an eye to maximizing the
effectiveness of partnership relationship practice.
4. The proposed assessment approach
Evaluation theory and practice have evolved substantially.
It now encompasses process and implementation evalu-
ations, in addition to impact and end of project evaluations,
and it honors the use of a range of research designs and
methods whose choice is driven by evaluation purpose
(Dym & Jacobs, 1998). This latter evolution is largely
credited to Patton’s (1997) promotion of utilization-focused
evaluation. Under this approach, evaluation is judged
according to its utility and actual use. This necessarily
means that utilization-focused research is highly personal
and situational—it emerges through a dialogue with
intended users as to their objectives, and the most mean-
ingful (i.e. useful) indicators and means of collection and
measurement (Drucker, 1990).
The proposed assessment approach is continuous,
process-oriented and participatory, and developmental,
where the assessor assumes the role of a critical friend.
The word ‘assessment’ is intentionally chosen over
evaluation, as assessment suggests an investigative process
that is more exploratory and developmental than confirma-
tory (Rendon et al., 1998). The assessment is process-
oriented both in the sense that it examines the processes by
which partners interact and provide goods and services, i.e.
focusing on actual operations and internal dynamics, and in
the sense that the specifics of the framework design and
implementation are themselves the result of process.
Generally, within the parameters of what needs to be
measured (to be discussed later), partners determine specific
criteria and priorities themselves. Self-stated criteria are
particularly important with respect to partners’ goals and
indicators of success (Poulin, Harris, & Jones, 2000). Self-
determination is a common practice for evaluations aiming
to improve performance (Huebner, 2000; Levinger &
Bloom 1997; Rendon et al., 1998).9
A process approach
serves additional functions as well; it brings conflict into the
open, provides a common platform for agreement, and
increases the legitimacy of proposed measures (NORAD,
1989).
Developmental evaluation (Patton, 1997) refers to
evaluation in the context of ongoing program or organiz-
ational development. Essentially, the evaluator acts as an
organization development consultant, applying evaluative
logic to performance assessment and improvements. The
purpose of the evaluation is to support program/project,
staff, and/or organizational development. According to
Patton (1997), ‘The evaluator is part of a team whose
members collaborate to conceptualize, design, and test new
approaches in a long-term, ongoing process of continuous
improvement, adaptation, and intentional change’ (p. 105).
Clear, specific, and measurable goals are not the basis up-
front, since these can be limiting, goals may vary among
team members, and the direction is not necessarily known in
advance. Most importantly, the goals of the intended users
must form the basis for evaluative indicators and methods.
7. These include the Inter-American Foundation’s (IAF) (1999) Grassroots Development Framework; and the Discussion Oriented Organizational Self-Assessment (DOSA), developed by the Education Development Center and Pact with assistance from USAID (Levinger & Bloom, 1997).
8. The comparative data is available at: http://www.edc.org/INT/CapDev/dosafile/findings.htm (accessed February 18, 2002). The identity of participating organizations is protected.
9. The challenge of self-determined criteria is to sufficiently inform participants’ selection with the evaluator’s expertise, particularly regarding indicator categories and comparative approaches. The effectiveness of this approach depends upon the participants’ respect for the evaluator’s expertise and acceptance of the general framework. Such respect and agreement can divert potential conflict among participants.

The critical friend model (Rallis & Rossman, 2000)
emphasizes this implicit learning function. The objective of
the critical friend model is to open the dialogue and blur the
borders between the act of evaluation and the program being
evaluated (Greene, 1990; qtd. in Rallis & Rossman, 2000).
Through dialogue, new knowledge is produced collectively.
The role of the assessor is to generate data and encourage
interpretations that foster learning. At the same time, the
assessor must adopt a critical stance that is ‘willing to
question the status quo and demand data to guide ethical
decisions about change’ (Rallis & Rossman, 2000, p. 84).
The relationship of the critical friends (the assessor and
partnership actors) is intended to be equitable and
reciprocal, where the traditional evaluation power relation-
ship is deliberately blurred, and all recognize the unique
contributions of others. With this developmental and
dialectic approach in mind, the assessment process aims to
improve partnership work in progress.
5. Assessment targets
Before discussing the general parameters of what should
be assessed in a partnership relationship, an important
caveat must be mentioned. Because many benefits of
partnership work derive from the relationship itself, and
because all relationships are dynamic, partnership assess-
ment should be seen as an evolving process. While
movement is not automatically uni-directional (i.e. always
leading towards a positive direction), the potential for
partnership’s added value tends to develop with time and
experience. This means that partnership cannot be expected
to yield immediate results, though this may occur. More
likely, it is as partners become more familiar with each
other’s strengths, weaknesses, operations, and representa-
tives that synergistic rewards will emerge. This process not
only entails an increase in mutual understanding, but also
trust building. Since partnerships are dynamic, they have
the potential to yield different costs and benefits at different
stages of their development. Furthermore, as they become
more effective and institutionalized relationships, one
should expect a gradual shift in emphasis within the
partnership work, from being activity-driven to becoming
more strategic, looking and planning for opportunities to
yield synergistic rewards. This caveat suggests humility in
the expectations of what partnership can deliver in the short-
term, and the need for diligence in ensuring that partnership
dynamism moves in a positive direction, toward greater
understanding, trust, and consequent efficiencies. It also
confirms the need to periodically revisit and possibly
redesign assessment indicators and processes.
The assessment approach seeks to test the theory that
partnership can produce an added value beyond other
relationship types. Partnership evaluation to date has
primarily focused on the causal chain in Fig. 1.
The proposed framework emphasizes relationship out-
comes, examining the causal chain in Fig. 2.
These relationships are moderated by the partnership’s
continuous incorporation of success factors and its
efficiency.
Accordingly, five general areas of assessment are
proposed: (1) compliance with prerequisites and success
factors in partnership relationships, (2) the degree of
partnership practice, (3) outcomes of the partnership
relationship, (4) partners’ performance, and (5) efficiency.
These five categories are inextricably linked and some
overlap. Targets of analysis for each area, along with
proposed evaluative methods are specified in Table 1. These
targets constitute the general framework within which
participants will negotiate and determine precise indicators
most meaningful to them. A summary of each target area
appears in the boxes later.
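Although the framework is deliberately negotiated rather than fixed, one hypothetical way to record it is as a simple data structure in which the five categories and their targets are given while the precise indicators are left for the partners to determine; the field names and example entries below are illustrative assumptions, not part of the framework itself.

```python
# Sketch of one way to record the proposed framework: fixed assessment
# categories and targets, with indicators and methods left for the partners to
# negotiate. Field names and example entries are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Target:
    name: str
    methods: list[str]                 # e.g. partner interview, partner survey
    indicators: list[str] = field(default_factory=list)  # partner-determined

@dataclass
class AssessmentCategory:
    name: str
    targets: list[Target]

framework = [
    AssessmentCategory("Prerequisites and success factors", [
        Target("Tolerance for sharing power", ["partner interview", "partner survey"]),
        Target("Existence of partnership champions", ["partner interview"]),
    ]),
    AssessmentCategory("Degree of partnership", [
        Target("Mutuality", ["partner survey", "process observation"]),
        Target("Organization identity", ["partner interview", "partner survey"]),
    ]),
    AssessmentCategory("Outcomes of the partnership relationship", [
        Target("Value-added", ["partner interview", "process observation"]),
    ]),
    AssessmentCategory("Partners' performance", [
        Target("Satisfaction with each other's performance", ["partner survey"]),
    ]),
    AssessmentCategory("Efficiency and strategy", [
        Target("Critical success factors monitored and managed", ["partner interview"]),
    ]),
]

# In a developmental assessment, the empty `indicators` lists would be filled
# through dialogue with the partners and revisited as the relationship evolves.
print(sum(len(c.targets) for c in framework), "targets awaiting partner-determined indicators")
```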
Fig. 1. Traditional causal chain.
Fig. 2. Causal chain for relationship outcomes.
Table 1
Summary of proposed assessment targets and methods
Category/targets Methodology
I. Presence of prerequisites and success factors Partner interview
A. Pre-requisites and facilitative factors Partner survey
† Perceptions of partners’ tolerance for sharing power
† Partners’ willingness to adapt to meet partnership’s needs
† Perception of receptivity to new solutions to improve the partnership,
its value, and day-to-day performance
† Speed and flexibility in addressing the need for corrective action
† Accommodation of special requests among the partners
† Responsiveness of partners to unforeseen situations
† Existence of partnership champions
† Existence of champions within each partner organization and within the
partnership as a whole
† Focus of champion’s advocacy (internal to a partner organization, within the
partnership, externally)
B. Success factors from the literature
† Trust Partner interview
† Character-based: perceptions of integrity, honesty, moral character, reliability,
confidentiality as appropriate, etc.
Partner survey
† Competence-based: perceptions of competence in prescribed/assumed skill areas,
business sense, common sense, judgment, knowledge, interpersonal skills,
understanding of partnership, etc.
† Confidence: standard operating procedures, contractual agreements and their
degree of formality
† Senior management support
† Direct participation
† Provision of resources and support to organization members participating
in the partnership
† Ability to meet performance expectations
† External constraints
† Partner capacity
† Clear goals Partner survey
† Consistent identification of partnership goals and mission
† Regular partner meetings to review, revise, and assess progress in meeting
identified goals
† Shared common vision for the partnership
† Mutually determined and agreed partnership goals
† Partner compatibility Partner interview
† Knowledge and understanding of partners’ mission, operations, and constraints Partner survey
† Previous conflict or confrontations among partners
† Compatible operating cultures (e.g. operating philosophies, management styles,
teamwork)
Process observation and assessment
Partner survey
† Compatible constituencies Partner survey
† Compatible core values
† Mechanisms to address incompatibilities
† Conflict
† Degree
† Frequency
† Extent of conflict avoidance within partnership
† Presence/absence of one or more dominating partners
II. Degree of partnership Process observation and assessment
Partner identification and assessment of indicators
A. Mutuality
† Mutuality and equality Partner interview
† Equality in decision making Partner survey
† Democratic procedures
† Satisfaction that all views are considered
† Joint determination of program activities and procedures
† Process for determining division of labor and risk/reward balance Partner interview
† Resource exchange Process observation and assessment
† Relative balance Partner survey
† Nature of resources exchanged
† Reciprocal accountability Partner survey
† Regular reporting among partners
† Access to performance information
† Financial controls balanced with administrative imposition
† Joint design of evaluations/assessments
† Transparency Partner survey
† Established channels for continuous dialogue and information sharing
† Timely response to information requests
† Sharing of relevant information beyond specified agreements/requirements
† Partner representation and participation in partnership activities
† Participation in planning and review meetings
† Program activities
† Partner satisfaction with opportunity to participate
† Rules governing who can represent the partnership, within what limits Partner interview
† Mutual respect Partner survey
† Consideration of partners and convenience in the planning of meetings
and other organizational requirements
† Recognition of indispensability of each partner, including unique strengths
† Shared understanding of respective partner drivers
† Even benefits
† Perception of fairness
† Satisfaction with benefit distribution
† Satisfaction with the criteria for benefit distribution
B. Organization identity
† Determining partner organization identities Partner interview
† Mission
† Major strengths and weaknesses
† Primary constituents
† Underlying values
† Organization culture
† Methods for assessing mission attainment and maintenance of all of the above
† Organization identity within the partnership
† Perception of threats or compromises of organization identity within the partnership
† Nature of organization adaptations/adjustments in order to effectively promote and
participate in the partnership
† Perception of partners adjustments in response to expressed concern about
organization identity
† Extent to which organization has changed as a result of partnership participation
and quality of that change
† Influence of partnership work on partner organizations’ service quality and
responsiveness to core constituencies
Partner survey
† Influence on and use of core constituencies
† Perceptions regarding the extent of mutual adaptation
† Perceptions of overall impact of partnership work on organization identity
III. Outcomes of the partnership relationship
1. Value-added Partner interview
† Qualitative synergistic outcomes of program Partner survey
† Quantitative synergistic outcomes of program Process observation and assessment
5.1. Pre-requisites and success factors of partnership relationships

Pre-requisites to effective partnership relationships
(Box 1) include partners’ tolerance for sharing power, and
willingness to adapt their operations and procedures to
facilitate the partnership’s performance (Brinkerhoff,
2002b). The presence of a partnership champion (and
whether champions exist within each partner organization)
is another facilitative factor. Champions are entrepreneurial
individuals who advocate on behalf of the partnership and
the partnership approach within their home organizations,
within the partnership as a whole, and externally.
Championing capacity not only entails communication,
negotiation, and organizational skills, but also perceived
legitimacy among partners and stakeholders.
Building from the literature, partnership effectiveness
can be gauged by the extent to which the relationship
complies with identified best practice. This is somewhat of a
controversial assessment target, since many of these
features lack empirical support. However, there is some
emerging consensus on what at least some of these success
factors may be. For example, Whipple and Frankel (2000)
surveyed business leaders in the food, and health and
personal care industries regarding their conceptions of
alliance success factors. Of a list of 18 factors generated
from an extensive literature review, they found general
consensus around five (though the ordering of these varied).
These five factors were: trust, senior management support,
ability to meet performance expectations, clear goals, and
partner compatibility. An additional factor, which merits
examination, is conflict. Several of the success factors can
be deconstructed to enhance their specificity and explana-
tory value (Box 2).
Trust can be based on either the character or the
competence of participating individuals and organizations
(Gabarro, 1987), and can also be distinguished from
confidence. Trust is voluntary, linked to shared values,
and is distinct from and potentially incompatible with
confidence (Tonkiss & Passey, 1999).
Table 1 (continued)
Category/targets Methodology
† Linkages with other programs and actors
† Enhanced capacity and influence of individual partners
† Other multiplier effects
2. Partners meet own objectives Partner interview
† Satisfaction with progress in meeting identified drivers Partner survey
† Qualitative and quantitative evidence of meeting drivers
† Enhanced performance in pursuing own mission
† Enhanced performance in satisfying constituencies
3. Partnership identity
† Partnership organization culture Process observation and assessment.
† Values
† Partnership mission, comparative advantages, value-added Partner interview
† Name recognition (e.g. stakeholder feedback, publicity, logo, web page) Process observation and assessment.
† Partnership constituencies
IV. Partner performance
A. Partners and partner roles enacted as prescribed or adapted for strategic reasons Review of project proposal
Partner interview
Partner survey
B. Partner assessment and satisfaction with their partners’ performance
† Compliance with expected and agreed roles Process observation and assessment
Partner interview
Partner survey
† Satisfaction of partners with each other’s performance Partner interview
† Partner performance beyond the call of duty (i.e. extra-role behavior) Partner survey
V. Efficiency and strategy Partner interview
† Identification of critical factors influencing partnership’s success
† Extent to which these are continuously monitored
† Extent to which these are strategically managed
Box 1
Pre-requisites and facilitative factors
† Tolerance for sharing power
† Willingness to adapt to meet partnership’s needs
† Receptivity to new solutions
† Flexibility in taking corrective action
† Accommodation of special requests
† Responsiveness to unforeseen situations
† Existence of champions
† Location
† Focus of advocacy
Box 2
Success factors
† Trust (character and competence)
† Confidence
† Senior management support
† Ability to meet expectations
† Clear goals
† Partner compatibility
† Conflict
Contrary to an ethical basis, confidence is based on rational expectations, typically
grounded in institutional arrangements, such as contracts,
regulations, and standard operating procedures (Luhmann,
1988). Specific partners may have a particular preference
for confidence over trust-based mechanisms. Presumably,
that preference may change with time and repeated
interaction among partners, as they accumulate experience
demonstrating partner dependability and trustworthiness
(Ostrom, 1990). In fact, Handy (1988) measures the level of
trust in a partnership by the inverse variable of level of
control, as indicated, for example, by reporting and approval
requirements (qtd. in Malena, 1995, 12).
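Handy's inverse relationship is qualitative, but one illustrative (and admittedly crude) way to operationalize it is to count the formal control mechanisms in force and treat their inverse as a rough trust proxy; the mechanisms and scaling below are assumptions for the sake of the sketch, not a validated measure.

```python
# Illustrative sketch of the idea that trust varies inversely with control:
# count formal control mechanisms (reporting, approvals, etc.) and use the
# inverse as a rough trust proxy. The mechanism list and the scaling are
# assumptions for illustration only.
control_mechanisms = {
    "monthly financial reports required": True,
    "prior approval needed for expenditures": True,
    "prior approval needed for public statements": False,
    "activity-level progress reports required": True,
}

controls_in_force = sum(control_mechanisms.values())
trust_proxy = 1 / (1 + controls_in_force)   # more controls -> lower proxy

print(f"controls in force: {controls_in_force}, trust proxy: {trust_proxy:.2f}")
```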
Senior management support contributes to partnership
performance both directly and indirectly. Directly, such
support translates into resource commitments (e.g. financial,
personnel, etc.) and often entails flexibility and consequent
time savings in terms of making adaptations to standard
procedures to accommodate partner preferences and con-
straints, or to maximize partnership performance. Indirectly,
the participation and support of senior management
symbolizes the organization’s commitment to the partner-
ship and its success, contributing to trust building among
partner organizations.
The ability of a partnership to meet performance
expectations can be examined at two levels. Individual
partner performance is discussed later. Ability to meet
performance expectations also refers to the existence of
constraints beyond the control of the partnership, which
inhibit its performance. These might include, for example,
legal or regulatory policies imposed by a funder or
government agency. Another constraint that should be
assessed and monitored is whether or not the partnership or
member organizations possess the necessary skills and
capacity. For example, sometimes partners are selected for
their relationship and legitimacy vis-à-vis key stakeholders,
but may lack organization capacity. This does not prohibit
partnership success, but it does identify areas for capacity
investment, and can significantly increase the complexity of
partnership implementation.
Clear goals are an important target of assessment both in
terms of outcome and process. With respect to the former, it
is important that all partners understand the partnership’s
goals (an indicator of partnership identity, discussed later)
and share a common vision for the partnership, and that
goals be clear so as to facilitate assessment. From a process
perspective it is important that the mission, vision, and goals
be mutually determined and agreed; this enhances the
likelihood of goal attainment and the partners’ commitment
(Leonard, 1998).
Partner compatibility also encompasses a range of
factors. The more partners know and understand of each
other’s mission, track record, operations, and constraints in
advance of the partnership, the less learning and trust
building has to occur in the context of implementation. The
evolution of this understanding is a key target of analysis.
The speed of understanding and trust building is mediated
by the partners’ previous experience. If the partners have
experienced conflict or confrontations in the past, it will
likely take much longer to become compatible partners.
Most importantly, partner compatibility implies that
partners do not fundamentally inhibit the organization
identity of themselves or their partners. For example, do
partners share core constituencies or at least not serve
conflicting ones? Are core values among the partners
contradictory? And if there are contradictions, can the
relationship be justified for a greater good that serves
the organizations’ missions? Are mechanisms in place
to guard against compromising identity due to these
incompatibilities?
Finally, conflict is an obvious target for assessment.
However, it is not as straightforward as might be presumed.
The absence of conflict may imply that mutual influence is
compromised or non-existent (see, for example, Brown &
Ashman, 1996). Lister (2000) draws upon Lukes’ (1974)
notion of power as ‘socially structured and culturally
patterned behavior’ (22) to demonstrate how power can be
exercised to shape the needs of others, influencing them to
pursue behavior in the interests of the power-holder. Thus,
consensus may imply a deeply ingrained power play.
Assessing the manifestation of such power plays is not
only highly subjective; it would be nearly impossible to
determine and would likely generate conflicting interpret-
ations. Still, it is a caveat worth noting in reviewing the
extent to which partners are maintaining their own identity
within the partnership. This is largely determined by the
existence of one or more dominating partners.
5.2. Measuring the degree of partnership
Partnership practice should be assessed on a relative
scale, because: desired goals and relationship preferences of
partners will vary; the ideal-type partnership may be
impossible to fully implement; and judgments of compli-
ance with this model are extremely subjective. The degree
of partnership can be assessed according to the presence of
its defining dimensions: mutuality and organization identity.
These dimensions are also contextually determined; specific
and meaningful indicators are best left to the partners to
determine. However, it is possible to recommend some
sample indicators as suggested by the literature and practice
to date.
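For instance, assuming partners have agreed on a small set of indicators and a 1 to 5 rating scale (both assumptions for illustration, since the framework leaves these to the partners), the two defining dimensions could be summarized on a relative scale by averaging partner ratings:

```python
# Sketch of scoring the two defining dimensions on a relative scale from
# partner ratings of self-determined indicators (1 = low, 5 = high). The
# indicators, scale, and simple averaging are illustrative assumptions.
ratings = {
    "mutuality": {
        "equality in decision making": [4, 3, 4],     # one rating per partner
        "transparency":                [5, 4, 4],
        "mutual respect":              [4, 4, 5],
    },
    "organization identity": {
        "mission maintained":          [5, 4, 4],
        "core constituencies served":  [3, 4, 4],
    },
}

for dimension, indicators in ratings.items():
    scores = [sum(r) / len(r) for r in indicators.values()]
    print(f"{dimension}: {sum(scores) / len(scores):.2f} / 5")
```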
5.2.1. Mutuality
Some of the most common indicators for mutuality (see
Box 3) include equality in decision making, resource
exchange, reciprocal (as opposed to hierarchical) account-
ability, transparency, and degree of partner representation
and participation in partnership activities. Equality in
decision-making is a challenge from the start, particularly
if there is a power imbalance among partners. Power
imbalances generally originate from one partner controlling
the majority of the resources. When this is the case, true
equality in decision-making can be skewed, whether
because the more powerful partner takes charge, or, more
subtly, because the less powerful partners defer to that
partner’s wishes so as not to jeopardize future resource
flows. This underscores the importance of resource
exchange. It is important to recognize that not all resources
are material. In other words, contributions can entail the
hard resources of money and materials, as well as important
soft resources, such as managerial and technical skills,
information, contacts, and credibility/legitimacy. Mutuality
implies mutual dependence among partners due to the
unique and indispensable contributions each of them makes.
With reciprocal accountability each partner takes
responsibility and is accountable to the others for its actions
and their potential impact on the partnership (Commins,
1997). Reciprocal accountability means that partners have
access to performance information of the overall partnership
and its individual partners on a regular basis and/or upon
request. Consequently, accountability is closely related to
transparency. Partners do not need to know everything about
each other, but in partnerships they should be open and
honest about areas of common concern or any information
that can potentially influence partnership effectiveness and
efficiency. Transparency is most commonly operationalized
as formal information exchange requirements and response
to specific information requests. Transparency can also be
less formal and/or structured, such as impromptu telephone
calls, e-mails, and conversations. Providing accurate and
timely information is both a professional duty and an
expression of respect (Peterson, 1997). This includes
making relevant information available in an accessible
manner, in the appropriate language, and with minimal use
of terminology specific to a particular professional culture
that excludes or is inconvenient to one or more partners.
Partnership should entail full participation of all member
partners, according to their comparative advantages and
agreed roles. This includes decision making, as above, as
well as participation in meetings, relevant discussions, and
program activities. Mutual respect is also a key component
of mutuality in partnership. Mutual respect rests on an
explicit recognition of the indispensability of each partner
and its contribution. Partners are aware of each of their
partner’s unique strengths and seek to effectively incorpor-
ate these into the partnership work. Mutual respect
presumes that all negotiation and agreements are made in
good faith, implying full disclosure of actor-specific
objectives. Mutual respect is manifested in the extent to
which each partner considers the implications of its actions
for the other partners. This includes the scheduling of
meetings, reporting requirements, and sensitivity to key
relationships and potential conflicts.
Finally, mutuality encompasses mutual benefit and
risk sharing: all partners share the risks and the glory of
their partnership work. This does not necessarily mean
that partners benefit equally. Absolute equality in this
sense would be extremely difficult to attain. Executives
and managers in the private sector acknowledge that
success deriving from alliances is based on a ‘relatively
even, but not equal exchange of benefits and resources’
(Whipple & Frankel, 2000, p. 21). Partners will need to
determine for themselves if they are satisfied with the
relative evenness of the benefits and costs of the
partnership work.
5.2.2. Organization identity
Partnership work inevitably entails adaptation. How-
ever, value-added is contingent upon each organization
balancing these adjustments with the maintenance of
their organization identity. In order to assess the
preservation of members’ organization identity within
the partnership, it is first necessary to determine
precisely what that identity is. This may entail a
process of self-awareness promotion, where partners
are asked to identify their mission, core constituencies,
underlying values, and organizational culture. In other
instances, partners may already be self-aware, implying
a stronger identity from the start. Key areas of
assessment with respect to organization identity main-
tenance include the degree of reciprocal adaptation for
the purpose of protecting organization identities while
maximizing their benefit to the partnership, maintenance
of service quality and responsiveness to partners’
constituencies, and maintenance of quality and focus
on partners’ comparative advantages (Box 4).
5.3. Outcomes of the partnership relationship
Relationship outcomes relate to the partnership’s value-
added. Value-added seeks to confirm and articulate that the
partnership as a whole yields more than what would have
resulted from the partner organizations operating indepen-
dently. Because each partnership is unique in its compo-
sition and programmatic goals, it is impossible to identify
specific cross-cutting value-added indicators. Furthermore,
as partnerships are dynamic and many are experimental, it is
also difficult to specify value-added indicators a priori.
However, evidence of value-added, whether aspired to or
identified after the fact, can be categorized as follows
(Box 5).
Box 3
Degree of partnership: mutuality
† Mutuality and equality (self-determined)
† Equality in decision making
† Resource exchange
† Reciprocal accountability
† Transparency
† Partner representation & participation
† Mutual respect
† Even benefits

Value-added may include qualitative or quantitative
synergistic outcomes of the program itself (i.e. aspects of
program performance that relate to advantages beyond what
the actors could have independently produced), linkages
with other programs and actors, enhanced capacity and
influence of individual partners, and other multiplier effects
such as program extensions and replication, new programs,
etc. These may be expected, hoped for, or unforeseen
outcomes of partnership work. For example, Brown and
Ashman (1996) identify the following potential multipliers
of inter-sectoral problem solving at the grassroots level: the
creation and strengthening of local organizations, expanded
activities and credibility of bridging NGOs, and the
establishment of norms of reciprocity, cooperation, and
trust among previously unrelated or antagonistic parties
(1477). Value-added is difficult to confirm since attribution
is problematic. Some indicators are more easily verified
than others. Hence, evaluating partnership value-added
is primarily (though not exclusively) perception- and
consensus-based, and is often closely related to partner
satisfaction.
Another element of the effectiveness and outcomes of the
partnership is the extent to which individual partners meet
their own objectives through the partnership. Since partner-
ship requires extra effort and is based on the will of the
partners to engage as partners, individual partner drivers
should be identified and their satisfaction assessed. This is
an indication that the partnership, or at least a particular
partner’s participation, will be sustainable. Partner drivers
will presumably include enhancing the organization’s
performance in pursuing its own mission and satisfying its
constituencies. The underlying theory is that if partnership
is appropriately established and effectively managed, it
should improve performance for all partners (Lambert,
Emmelhainz, & Gardner, 1996, p. 11).
Finally, a successful partnership relationship is one that
has developed its own partnership identity. This identity is
the glue that holds the partners together and forms the basis
for legitimacy and values identification of its major
stakeholders. Partnership identity entails an identifiable
organization culture, complete with processes and mechan-
isms reflective of the partnership’s underlying values; a
unique, and identifiable mission, with associated compara-
tive advantages and value-added; and a set of constituencies
that may go beyond the constituencies of individual partner
organizations.
5.4. Partner performance
Targets for partner performance are summarized in
Box 6.
Some aspects of partner performance can be assessed
objectively, by examining whether or not the partnership
encompasses the partners and prescribed roles that were
anticipated, or if not, whether or not changes were made in
the service of overall objectives as a form of strategic
adaptation. In addition to noting whether or not partners
performed the roles prescribed (or subsequently agreed), it
is important to assess whether or not they did so effectively
and efficiently. Portions of such an assessment are
contextual. In assessing relationships, the most important
indicator of partner performance is the other partners’
satisfaction with that performance. In business alliances,
successful companies are twice as likely as less successful companies to assess their partners' performance
(Harbison & Pekar, 1998). Partner performance entails an
independent assessment of partner contributions in accord-
ance with program design and partner agreements, as well as
a mutual assessment among the partners of each partner’s
performance. Discrepancies among these assessments are an
important end in themselves, pointing to a need for better
information sharing and trust building. Partner performance
assessments should also note whether a partner acted above
and beyond the call of duty in promoting and performing
within the partnership.
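One way to surface such discrepancies is to set the assessor's independent rating of each partner's performance beside the average rating that partner receives from its peers, flagging large gaps for joint discussion. The sketch below is a hedged illustration of that comparison, not an instrument proposed in the framework; the partner names, the 1-5 scale, and the gap threshold are assumptions.

# Hypothetical 1-5 ratings of each partner's performance: the assessor's independent
# rating and the ratings given by the other partners.
independent = {"NGO A": 4.0, "Firm B": 4.0, "Agency C": 4.5}
mutual = {
    "NGO A": [4.5, 4.0],
    "Firm B": [2.0, 2.5],
    "Agency C": [4.0, 4.5],
}

GAP_THRESHOLD = 1.5  # illustrative cut-off for suggesting a joint review

for partner, peer_scores in mutual.items():
    peer_average = sum(peer_scores) / len(peer_scores)
    gap = abs(independent[partner] - peer_average)
    flag = "review jointly" if gap >= GAP_THRESHOLD else "broadly consistent"
    print(f"{partner}: assessor={independent[partner]:.1f}, "
          f"peers={peer_average:.1f}, gap={gap:.1f} ({flag})")

A flagged gap is treated as a prompt for information sharing and trust building, consistent with the discussion above, rather than as a verdict on any one partner.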
Box 5
Outcomes of the partnership relationship
1. Value-added
• Qualitative and quantitative synergistic program outcomes
• Linkages with other programs and actors
• Enhanced capacity & influence
• Other multiplier effects
2. Partners meet own objectives
• Satisfaction in meeting identified drivers
• Evidence of meeting drivers
• Enhanced performance in pursuing own mission
• Enhanced performance in satisfying constituencies
3. Partnership identity
• Partnership organization culture
• Values
• Partnership mission & value-added
• Name recognition
• Partnership constituencies

Box 4
Degree of partnership: organization identity
• Determining partner organization identities
• Perception of threats or compromises
• Nature of organization adaptations/adjustments
• Perception of partners' adjustments in response to expressed concern
• Extent and quality of organizational change
• Influence on partners' service quality and responsiveness to core constituencies
• Influence on and use of core constituencies
• Perceptions of mutual adaptation
• Perceptions of overall impact on identity

5.5. Efficiency

Assessing the efficiency of the partnership relationship
entails indicators for monitoring, maintaining, and improving the partnership and its contribution to effectiveness and
impact. These relate to the broader subject of strategic
management. All organizations and programs face varying
degrees of environmental hostility, both internally and
externally. This aspect of partnership assessment addresses
the extent to which there is environmental hostility toward
the partnership program and approach, and the extent to
which this hostility is proactively managed (Box 7). That is,
is the partnership strategically managed such that all
opportunities to reduce operating costs, such as those
derived from environmental hostility, are pursued?
Relevant components of environmental hostility will
vary from partnership to partnership. In some instances it
may be inefficient to invest resources in influencing
particular environmental factors. For this reason, partnership leaders should first determine which of these hostile factors are critical and, of those, which can be appreciated, influenced, or controlled at a reasonable cost (a simple prioritization sketch follows the list of sample factors below). In some instances managing these critical factors may entail no or only marginal additional costs. For example, leaders should ensure that incentives and drivers for partnership champions and partnership organizations are always clearly identified, and should emphasize how the partnership responds to them. Such efforts can be pursued, for example,
in the context of day-to-day management and communi-
cations. Sample factors determining environmental hostility
include:
• presence or potential of partnership champions;
• existence, effectiveness, and efficiency of institutional linkages among partners;
• capacity, commitment, strong organization identity, and compatibility of partner organizations;
• extent to which there is a ready demand for partnership products and services;
• homogeneity and degree of organization among partnership stakeholders and constituents;
• degree to which legal frameworks are facilitative or inhibiting; and
• stability of the partnership's internal and external environments (Brinkerhoff, 2002b).
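The prioritization of hostile factors described above can be made explicit with a simple scoring exercise: partners rate each factor for how critical it is to the partnership's success and how feasibly it can be influenced at reasonable cost, and the product ranks where strategic management effort should go first. The following sketch is illustrative only; the 1-5 scales and the scores attached to each factor are invented for the example and are not prescribed by the framework.

# Hypothetical scores on a 1-5 scale: how critical each hostility factor is to the
# partnership's success, and how feasibly it can be influenced at reasonable cost.
factors = [
    ("presence of partnership champions", 5, 4),
    ("institutional linkages among partners", 4, 4),
    ("ready demand for partnership products and services", 3, 2),
    ("facilitative legal frameworks", 4, 1),
    ("stability of internal and external environments", 3, 2),
]

# Rank by criticality x feasibility; top-ranked factors are candidates for proactive
# strategic management, while low-ranked ones may simply be monitored.
for name, criticality, feasibility in sorted(factors, key=lambda f: f[1] * f[2], reverse=True):
    print(f"{name}: criticality={criticality}, feasibility={feasibility}, "
          f"priority={criticality * feasibility}")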
6. Proposed assessment methodology
The proposed assessment methodology conforms to the
critical friend and developmental models described above.
It is also multi-faceted. There are three primary methods
proposed. A summary of the application of each to the
assessment targets appears in Table 1. Sequencing of these
methodologies is important to their iterative contributions.
The partner survey will encourage participants to begin to
reflect on the issues. The survey and an initial baseline
assessment and process observation will inform the partner
interviews to follow. Subsequent applications of the three
methodologies will build upon the results of each. Process
observation will be continuous, with scheduled reporting
and joint analysis determined by the participants. Based on
these results, participants can negotiate the need for
subsequent interviews and surveys. An application and
analysis of all three methods would be expected at key
moments in the life of the program, such as the mid-point
and closing, or around critical events.
6.1. Process observation and assessment
This method will include a review of project documen-
tation and reports, observation of management meetings and
program activities, and analysis of all data, including those collected through the methods described below. The assessment aspect of this
method will include an initial summary and analysis, to be
supplemented by interactive feedback and interpretation
sessions with partnership actors collectively. Such sessions
will more directly address the developmental aspects of the
assessment, in the service of improvements and learning.
6.2. Partner survey
A survey will be administered to partner organizations
and staff. The survey will consist primarily of closed-ended
questions. Many of these will be scored on quantitative ordinal scales. The partner survey can later be adapted, as needed
and appropriate, and re-administered periodically through-
out the lifetime of the partnership or for the determined
length of the assessment process. Subsequent surveys will
be informed by the partner interview, agreed indicators, and
the results of process observation and assessment. In
particular, follow-up questions for baseline data and
subsequent monitoring will be developed and incorporated
into later surveys.
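Because the survey items are closed-ended and scored on ordinal scales, the same items can be compared across administrations to monitor movement on each assessment target. The sketch below is a hedged illustration rather than the survey instrument itself; the item wording, five-point agreement scale, round labels, and the use of simple means are assumptions.

from statistics import mean

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree) to two
# closed-ended survey items, collected at baseline and again at the program mid-point.
responses = {
    "Decision making is shared equally among partners": {
        "baseline":  [3, 2, 4, 3, 3],
        "mid-point": [4, 3, 4, 4, 3],
    },
    "Partners are mutually accountable for results": {
        "baseline":  [2, 3, 3, 2, 3],
        "mid-point": [3, 3, 4, 3, 3],
    },
}

for item, rounds in responses.items():
    baseline = mean(rounds["baseline"])
    followup = mean(rounds["mid-point"])
    print(f"{item}: baseline={baseline:.1f}, mid-point={followup:.1f}, "
          f"change={followup - baseline:+.1f}")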
Box 6
Partner performance
• Partner roles enacted as prescribed or adapted for strategic reasons
• Compliance with expected & agreed roles
• Satisfaction with partners' performance
• Partner performance beyond the call of duty (i.e. extra-role behavior)

Box 7
Efficiency
• Identification of critical factors influencing partnership's success (self-determined)
• Extent to which these are continuously monitored
• Extent to which these are strategically managed

6.3. Partner interviews

Representatives of each partner organization will be interviewed to determine baseline information and potential indicators for definitions of mutuality and equality, partner organization identity and its maintenance within the partnership, partnership value-added, partner objectives, partnership identity, partner performance, and efficiency
and strategic management of the partnership. The interviews
will be semi-structured, with a combination of closed- and
open-ended questions. Initial interviews will be informed by
the results of the first partner survey. Appropriate repre-
sentatives will be determined by the partner organizations
themselves and may entail one or more individuals as
appropriate.
Feedback on potential indicators will form the basis for
applying an adaptation of the Delphi technique in order to
specify indicators agreeable to all. The Delphi technique is a
“group process technique for eliciting, collating, and
generally directing informed (expert) judgment towards a
consensus on a particular topic” (Delp, Thesen, Motiwalla,
& Seshadri, 1977, p. 168). It typically consists of
anonymous input on a range of issues for which consensus
is sought. Several rounds of input and feedback, typically
through mail, are conducted, with data collected, collated,
and analyzed to inform subsequent rounds until consensus
emerges (or disagreement is highlighted). The proposed
Delphi adaptation will entail initial data collection through
partner interviews, with subsequent e-mail ranking and
feedback, potentially to a broader group than those initially
interviewed. Final presentation and agreement on suggested
indicators will occur during a face-to-face, interactive
meeting.
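At its core, the adapted Delphi process collates anonymous rankings of candidate indicators, feeds summaries back, and repeats until agreement emerges or disagreement is made explicit. The sketch below illustrates only the collation step for a single round; the indicator names, the rank data, and the interquartile-range rule used to signal "consensus" are assumptions for illustration rather than elements specified in the paper.

from statistics import median, quantiles

# Hypothetical anonymous rankings (1 = highest priority) of candidate indicators
# returned by e-mail in one Delphi round.
rankings = {
    "transparency of decision making":       [1, 2, 1, 3, 2],
    "evidence of reciprocal accountability": [3, 1, 2, 2, 4],
    "partner satisfaction with value-added": [2, 3, 4, 1, 1],
}

for indicator, ranks in rankings.items():
    q1, _, q3 = quantiles(ranks, n=4)          # quartiles of the submitted ranks
    interquartile_range = q3 - q1
    status = ("consensus reached" if interquartile_range <= 1
              else "summarize and feed back for another round")
    print(f"{indicator}: median rank={median(ranks)}, "
          f"IQR={interquartile_range:.1f} -> {status}")

Summaries of this kind would then be circulated for the next round, with final agreement reached in the face-to-face meeting described above.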
7. Review and next steps
The proposed partnership relationship framework
addresses the evaluation challenges of integrating process
and institutional arrangements into performance measure-
ment systems, thus contributing to relationship performance
as well as program outcomes. It also potentially enhances
the theory and practice of partnership.
The developmental model and critical friend approach
address the challenge of identifying, articulating, and
measuring processes and institutional arrangements by: (1) maintaining a continuous assessment presence, whether directly by the assessor or indirectly by partnership members who have been sensitized to measurement, assessment, learning, and the associated agreed targets of analysis; and (2) engaging in dialogue to ensure shared understanding and to create new knowledge. Within this
framework, relevant indicators, both qualitative and quan-
titative, can be jointly developed and measured through both
intensive, open-ended interviews and standardized ques-
tionnaires. Feedback and assessment sessions allow for
periodic adjustments in targets of analysis and the
particulars of the program theory regarding partnership’s
contributions to performance.
These sessions enable adjustments to processes and
behaviors that can improve the relationship and project
performance. The mutual understanding and trust building
that can emerge from such an assessment process can also
lead to the identification of additional objectives and
opportunities within the partnership. Alternatively, such a
process may provide a mechanism for actors to readily
gauge their satisfaction (or dissatisfaction) with the
relationship and prospects for improved relationships and
practice, potentially leading to a decision to terminate the
relationship. That is, as the partnership as a whole is
assessed, each partner will also have a framework in which
to conduct its own cost-benefit analysis of the partnership
work, whether formally or informally.
The framework also begins to address the problem of
attribution by maximizing the partnership’s compliance
with the definitional dimensions of partnership, with
specific characteristics of the relationship determined by
the actors themselves. This enables the assessor to more
accurately determine if any program failures or inefficien-
cies are due to the program implementation itself or
inadequacies in the partnership relationship. Attribution
cannot be definitively determined. However, such a process
can reduce some of the noise inherent to attribution
challenges, and can be used to further refine our theory
regarding partnership’s contribution to performance. As
knowledge emerges about how relationship attributes are
enhancing performance, these attributes can be more
specifically targeted and enhanced.
By combining standardized assessment targets with self-
determined indicators and interpretation, the proposed
framework is well positioned to follow the DOSA model.
Comparative data from a number of partnership experiences
could conceivably contribute to a benchmarking effort,
which could inform partnership actors beyond those
participating, and potentially contribute to a wider appli-
cation of the proposed assessment framework, more
attention to relationship outcomes, and improved partnership performance. Partnership practitioners would benefit from
exploring how other partnerships have specified and
measured partnership performance indicators. Benchmark-
ing could also assist actors to manage expectations as
partnerships evolve. Comparative data analysis might serve
to identify cycles of performance, common challenges, and
best practices.
The reality remains that without some enlightenment of
program and organizational leaders and managers, funders,
constituents, and perhaps the general public, process
evaluation and the assessment of institutional arrangements
are not likely to be mainstreamed as an essential component
of performance management and evaluation. However,
efforts such as the one described in this paper are likely to
move this agenda forward in terms of ideas, conceptual
frameworks, and the how tos of such assessments, and
hopefully in promoting an accumulation of experience with
these.
In particular, it is hoped that this framework and its
subsequent application will shed light on our understanding
of partnership and its effectiveness as an institutional
arrangement for getting results. The partnership rhetoric is
strong; the practice has been relatively weak. Frameworks
such as the one prescribed here can promote: (1) a more
refined understanding of partnership in general and how it
differs from other institutional arrangements; and (2) a more
practical determination of what partnership can mean in the
context of particular programs and relationships, i.e. as
determined and jointly agreed by members. Furthermore,
such frameworks can contribute substantially to the
identification and measurement of partnership attributes,
such as mutuality and identity, and partnership value-added.
The private sector provides a salient lesson on this
point. The importance of intangible assets has increased
substantially (more than 80% of companies’ book value by
the end of the 20th century). While managers recognized
this evolution, until more sophisticated performance
measurement frameworks were developed, “They could
not manage what they could not describe or measure”
(Kaplan & Norton, 2001, p. 88). Similarly, the inability to
articulate features of partnership and its contribution to
performance has heretofore limited its effectiveness and discouraged the investments necessary to attain its value-added. The
application of the proposed partnership relationship assess-
ment framework will assist future efforts to design and
implement effective partnership relationships, as well as
promote partnership practice and the maximization of its
contribution to outcomes.
8. Lessons learned
This framework was originally developed for application
to a federally funded consortium of non-profits and private
consulting firms. The consortium management committee
rejected the framework as proposed, with claims, among
others, that it was insufficiently specific. The framework’s
introduction and subsequent developments yield three
important lessons for evaluating partnership relationships.
First, people continue to be uncomfortable with address-
ing issues of trust and other relationship dynamics.
Participants emphasized the need to focus on indicators of
program performance. For the most part, such reactions can
represent discomfort with the new or different, pointing to a
need for the assessor to clearly explain the framework in a
face-to-face setting, which in this case was not possible.
Another complaint was that participants would be uncom-
fortable addressing direct questions about their perceptions
and feelings vis-à-vis partners’ trustworthiness and compe-
tence. Such discomfort may be unavoidable. However, two
possible responses are to: (1) introduce these issues in non-
threatening, positively framed language, and (2) stress the
assumption that the assessor and participants are trust-
worthy, interested in partnership performance, and will
avoid blaming behavior.
Second, the resistance and ultimate rejection of the
framework confirm the need for champions for such efforts.
In this instance, the original champion of relationship
assessment convinced consortium members, as well as the
contracting federal agency of the merit of such an approach,
both for improving the performance of this program, and for
identifying lessons for subsequent contracted programs. The
assessment was thus written into the approved program
proposal and resulting contract. Subsequently, the assess-
ment was left without a champion and without a shared
understanding of the merit of the exercise. The original
champion’s home organization withdrew from the program
early on due to changes in the policy environment, and
turnover among the federal oversight staff eliminated the
institutional memory supportive of the initial decision.
Consequently, when discomfort emerged, remaining par-
ticipants were quick to reject the proposal.
A replacement assessor, with minimal evaluation
experience, was hired part-time. This hiring demonstrates
a substantially reduced commitment to evaluation in general
and implies reluctance to follow through with the original
and agreed plan. Furthermore, in an exit interview, I was
told there was already dissatisfaction with the efforts of the
new evaluator (after only 1 month). It became obvious that
the participants were unclear about what they wanted and
what was required of them. Since the contracting agency
had not pressed them on the issue, they seemed to be
searching for ways to meet the minimum requirements of
the agreed program proposal. At this point, a champion
could assist the consortium management, as well as the
contracting agency representative, to at least agree upon the
objectives and a potential revision of the assessment
requirement.
During the exit interview, a third lesson emerged. My lack
of technical sectoral expertise (e.g. health, environment,
agriculture) and its absence in the framework were repeatedly
emphasized. I had raised this from the beginning during the
hiring process; it did not seem to be of concern at the time,
and was not a part of my scope of work. The dissatisfaction
with my lack of expertise could be interpreted as a ploy to
reject the framework and end the initiative. However, sectoral
expertise can be useful. For example, in process observation,
without substantive program expertise it could be difficult to
determine if ideas are rejected based on lack of technical
merit, or if rejection reflects a power dynamic among
consortium members. Technical expertise can be accessed
through the critical friend approach, where the assessor works
closely with participants in interpreting data. The assessor
should alert participants to the need to work with each other
and with the assessor to ensure proper interpretation of events
and indicators.
These challenges, lessons, and recommendations have
been confronted in similarly complex evaluation efforts, such
as comprehensive community initiatives. In particular, Brown
(1995) outlines similar political and resistance dynamics and
confirms evaluators’ increasing need for three skills: pedago-
gical, political, and trust building. Programs increasingly
operate at multiple levels with diverse stakeholders, whose
relations cannot be divorced from program performance. In
order to capture these dynamics, Connell and Kubisch (1999)
propose a theory of change evaluation approach, which, when
combined with the aforementioned skills, may assist evalua-
tors to make progress in their design and application of
learning-based evaluation approaches that would provide a
better understanding of the role of multi-party relationships in
improving program performance.
References
Albert, S., & Whetten, D. A. (1985). Organization identity. In L. L.
Cummings, & B. M. Staw (Eds.), Research in organizational behavior,
(Vol. 7) (pp. 263–295). Greenwich, CT: JAI Press.
Birckmayer, J. D., & Weiss, C. H. (2000). Theory-based evaluation in
practice: What do we learn? Evaluation Review, 24(4), 407–431.
Brinkerhoff, J. M. (2002a). Government-NGO partnership: A defining
framework. Public Administration and Development, 22(1).
Brinkerhoff, J. M. (2002b). Partnerships for international development:
Rhetoric or reality. Boulder, CO: Lynne Rienner Publishers.
Brown, P. (1995). The role of the evaluator in comprehensive community
initiatives. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss
(Eds.), New approaches to evaluating community initiatives. Volume 1:
Concepts, methods, and contexts, Washington, DC: The Aspen
Institute. http://www.aspenroundtable.org/vol1/index.htm. Cited February 17, 2002.
Brown, D. L., & Ashman, D. (1996). Participation, social capital, and
intersectoral problem-solving: African and Asian cases. World
Development, 24(9), 1467–1479.
Charles, C., & McNulty, S. (1999). Partnering for results: Assessing the
impact of inter-sectoral partnering. Washington, DC: Agency for
International Development.
CIVICUS (the World Alliance for Citizen Participation) (2001). CIVICUS
index on civil society project, http://www.civicus.org/index/
diamondhome.html.
Commins, S. (1997). World vision international and donors: Too close for
comfort? In D. Hulme, & M. Edwards (Eds.), NGOs, states and donors:
Too close for comfort? New York: St Martin’s Press in association with
Save the Children Fund.
Connell, J. P., & Kubisch, A. C. (1999). Applying a theory of change
approach to the evaluation of comprehensive community initiatives:
Progress, prospects, and problems. In K. Fullbright-Anderson, A. C.
Kubisch, & J. P. Connell (Eds.), Theory, measurement, and analysis,
(Vol. 2). Washington, DC: Aspen Institute. http://www.
aspenroundtable.org/vol2/index.htm. Cited February 17, 2002.
Delp, P., Thesen, A., Motiwalla, J., & Seshadri, N (1977). System tools for
project planning. Bloomington, IN: Program of Advanced Studies in
Institutional Building and Technical Assistance Methodology, Inter-
national Development Institute, Indiana University.
Dobbs, J. H. (1999). Competition’s new battleground: The integrated value
chain. Cambridge, MA: Cambridge Technology Partners.
Drucker, P. (1990). Managing the nonprofit organization: Principles and
practices. New York: Harper Collins.
Dym, B., & Jacobs, F. (1998). Taking charge of evaluation. The Nonprofit
Quarterly, 5(3).
Edwards, M. (1996). Too close for comfort? The impact of official aid on
nongovernmental organizations. World Development, 24(6), 961–973.
Ellinger, A. E., Keller, S. B., & Ellinger, A. D. (2000). Developing
interdepartmental integration: An evaluation of three strategic
approaches for performance improvement. Performance Improvement
Quarterly, 13(3), 41–59.
Fowler, A. (1997). Striking a balance: A guide to enhancing the
effectiveness of NGOs, in International Development. London: Earth-
scan Publications.
Funnell, S. C. (2000). Developing and using a program theory matrix for
program evaluation and performance monitoring. New Directions for
Evaluation, 87, 91–101.
Gabarro, J. J. (1987). The development of working relationships. In J. W.
Lorsch (Ed.), Handbook of organizational behavior. Englewood Cliffs,
NJ: Prentice Hall.
Gioia, D. A., Schultz, M., & Corley, K. G. (2000). Organizational identity,
image and adaptive instability. Academy of Management Review, 25(2),
65–81.
Greene, J. C. (1990). Three views on the nature and role of knowledge in
social science. In E. G. Guba (Ed.), Paradigm dialog. Thousand Oaks,
CA: Sage.
Handy, C. (1988). Understanding voluntary organisations. London:
Penguin Books.
Harbison, J. R., & Pekar, Jr P. (1998). Institutionalizing alliance skills:
Secrets of repeatable success. Strategy and Business, Second Quarter.
Huebner, T. A. (2000). Theory-based evaluation: Gaining a shared
understanding between school staff and evaluators. New Directions
for Evaluation, 87, 79–89.
Hulme, D., & Edwards, M. (Eds.), (1997). NGOs, states and donors: Too
close for comfort? New York: St. Martin’s Press in association with
Save the Children.
Inter-American Foundation (IAF) (1999). The grassroots development
framework: Project objectives, baseline data, and results report.
Arlington, VA: Inter-American Foundation.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard: Measures
that drive performance. Harvard Business Review, January–February,
71–79.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating
strategy into action. Boston, MA: Harvard Business School Publishing.
Kaplan, R. S., & Norton, D. P. (2001). Transforming the balanced scorecard
from performance measurement to strategic management: Part I.
Accounting Horizons, 15(1), 87–104.
Kellner, P., & Thackray, R. (1999). A philosophy for a fallible world. The
New Statesman, 12(547), R22–R25.
Lambert, D. M., Emmelhainz, M. A., & Gardner, J. T. (1996). Developing
and implementing supply chain partnerships. The International Journal
of Logistics Management, 7(2), 1–17.
Leonard, L. G. (1998). Primary health care and partnerships: Collaboration
of a community agency, health department, and university nursing
program. Journal of Nursing Education, 37(3), 144–151.
Levinger, B., & Bloom, E (1997). Discussion-oriented organizational self-
assessment. The Education Development Center and Pact, with
assistance from the Office of Private and Voluntary Cooperation, US
Agency for International Development.
Lipsky, M., & Smith, S. R. (1989–1990). Nonprofit organizations,
government, and the welfare state. Political Science Quarterly,
104(4), 625–648.
Lister, S. (2000). Power in partnership? An analysis of an NGO’s
relationships with its partners. Journal of International Development,
12(2), 227–239.
Luhmann, N. (1988). Familiarity, confidence, trust: Problems and
perspectives. In D. Gambetta (Ed.), Trust: The making and breaking
of cooperative relations. Oxford: Basil Blackwell.
Lukes, S. (1974). Power: A radical view. London: Macmillan Press.
Malena, C. (1995). Relations between northern and southern non-
governmental development organizations. Canadian Journal of
Development Studies, 16(9), 7–29.
Murphy, D. F., & Bendell, J. (1997). In the company of partners: Business,
environmental groups and sustainable development post-Rio. England:
Policy Press.
Norwegian Agency for Development Cooperation (NORAD) (1989). Some
planning and evaluation strategies. Guide to planning and evaluating
NGO projects. Part I: Principles and policies of development assistance.
Oslo: Author.
Ostrom, E. (1990). Governing the commons: The evolution of institutions
for collective action. Cambridge: Cambridge University Press.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text
(3rd ed). Thousand Oaks, CA: Sage Publications.
Peterson, D. J (1997). The NGO/donor workshop: Highlights of the
discussion. In E. Klose & I. Hunt (Ed.), NGO/Donor workshop,
Szentendre, May 12–14, 1997: A summary report. Szentendre,
Hungary: ISAR: Clearinghouse on Grassroots Cooperation in Eurasia
and the Regional Environmental Center for Central and Eastern Europe
in collaboration with ECOLOGIA and the Environmental Partnership
for Central Europe, with support from the US Agency for International
Development; the Environmental Ministries of Austria, Finland, and the
Netherlands; and the World Bank.
Poulin, M. E., Harris, P. W., & Jones, P. R. (2000). The significance of
definitions of success in program evaluation. Evaluation Review, 24(5),
516–536.
Provan, K. G., & Milward, H. B. (2001). Do networks really work? A
framework for evaluating public-sector organizational networks. Public
Administration Review, 61(4), 414–423.
Rallis, S. F., & Rossman, G. B. (2000). Dialogue for learning: Evaluator as
critical friend. New Directions for Evaluation, 86, 81–92.
Rendon, L. I., Gans, W. L., & Calleroz, M. D. (1998). No pain, no gain: The
learning curve in assessing collaboratives. New Directions for
Community Colleges, 103, 71–83.
Schonberger, R. J. (1996). Backing off from the bottom line. Executive
Excellence, May, 16–17.
Shah, J., & Singh, N. (2001). Benchmarking internal supply-chain
performance: Development of a framework. Journal of Supply Chain
Management, 37(1), 37–47.
Squire, L. (1995). Evaluating the effectiveness of poverty alleviation
programs. New Directions for Evaluation, 67, 27–37.
Tonkiss, F., & Passey, A. (1999). Trust, confidence and voluntary
organisations: Between values and institutions. Sociology, 33(2),
257–274.
Whipple, J. M., & Frankel, R. (2000). Strategic alliance success factors.
Journal of Supply Chain Management, 36(3), 21–28.

More Related Content

Similar to Assessing and improving partnership relationships and outcomes a proposed framework.pdf

The role of Monitoring and Evaluation in Improving Public Policies – Challeng...
The role of Monitoring and Evaluation in Improving Public Policies – Challeng...The role of Monitoring and Evaluation in Improving Public Policies – Challeng...
The role of Monitoring and Evaluation in Improving Public Policies – Challeng...UNDP Policy Centre
 
Results-based-Management.pdf
Results-based-Management.pdfResults-based-Management.pdf
Results-based-Management.pdfpophius
 
Do evaluations enhance organisational effectiveness?
Do evaluations enhance organisational effectiveness?Do evaluations enhance organisational effectiveness?
Do evaluations enhance organisational effectiveness?Innocent Karugota Muhumuza
 
Paper on-balance-scorecard1
Paper on-balance-scorecard1Paper on-balance-scorecard1
Paper on-balance-scorecard1Ijcem Journal
 
USER GUIDE M&E 2014 LENNY HIDAYAT
USER GUIDE M&E 2014 LENNY HIDAYATUSER GUIDE M&E 2014 LENNY HIDAYAT
USER GUIDE M&E 2014 LENNY HIDAYATLenny Hidayat
 
Data1. From the Table below prepare the following Financial Statem.docx
Data1. From the Table below prepare the following Financial Statem.docxData1. From the Table below prepare the following Financial Statem.docx
Data1. From the Table below prepare the following Financial Statem.docxwhittemorelucilla
 
Measuring Impact and Return on Investment of Corporate Social Investment and ...
Measuring Impact and Return on Investment of Corporate Social Investment and ...Measuring Impact and Return on Investment of Corporate Social Investment and ...
Measuring Impact and Return on Investment of Corporate Social Investment and ...Next Generation Consultants: Reana Rossouw
 
Running head IMPROVEMENT OPPORTUNITY .docx
Running head IMPROVEMENT OPPORTUNITY                             .docxRunning head IMPROVEMENT OPPORTUNITY                             .docx
Running head IMPROVEMENT OPPORTUNITY .docxwlynn1
 
Pfm Measure 2008
Pfm Measure 2008Pfm Measure 2008
Pfm Measure 2008euweben01
 
Pfm Measure 2008
Pfm Measure 2008Pfm Measure 2008
Pfm Measure 2008euwebsc01
 
Pfm Measure 2008
Pfm Measure 2008Pfm Measure 2008
Pfm Measure 2008euwebtc01
 
Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness Innocent Karugota Muhumuza
 
Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness Innocent Karugota Muhumuza
 
Evaluation of health programs
Evaluation of health programsEvaluation of health programs
Evaluation of health programsnium
 
Definations for Learning 24 July 2022 [Autosaved].pptx
Definations for Learning 24 July 2022 [Autosaved].pptxDefinations for Learning 24 July 2022 [Autosaved].pptx
Definations for Learning 24 July 2022 [Autosaved].pptxInayatUllah780749
 
The field of program evaluation presents a diversity of images a.docx
The field of program evaluation presents a diversity of images a.docxThe field of program evaluation presents a diversity of images a.docx
The field of program evaluation presents a diversity of images a.docxcherry686017
 
Importance of Monitoring and Evaluation to Decentralization
Importance of Monitoring and Evaluation to DecentralizationImportance of Monitoring and Evaluation to Decentralization
Importance of Monitoring and Evaluation to DecentralizationIssam Yousif 2000+
 

Similar to Assessing and improving partnership relationships and outcomes a proposed framework.pdf (20)

The role of Monitoring and Evaluation in Improving Public Policies – Challeng...
The role of Monitoring and Evaluation in Improving Public Policies – Challeng...The role of Monitoring and Evaluation in Improving Public Policies – Challeng...
The role of Monitoring and Evaluation in Improving Public Policies – Challeng...
 
Results-based-Management.pdf
Results-based-Management.pdfResults-based-Management.pdf
Results-based-Management.pdf
 
065 0791
065 0791065 0791
065 0791
 
Do evaluations enhance organisational effectiveness?
Do evaluations enhance organisational effectiveness?Do evaluations enhance organisational effectiveness?
Do evaluations enhance organisational effectiveness?
 
Paper on-balance-scorecard1
Paper on-balance-scorecard1Paper on-balance-scorecard1
Paper on-balance-scorecard1
 
USER GUIDE M&E 2014 LENNY HIDAYAT
USER GUIDE M&E 2014 LENNY HIDAYATUSER GUIDE M&E 2014 LENNY HIDAYAT
USER GUIDE M&E 2014 LENNY HIDAYAT
 
Data1. From the Table below prepare the following Financial Statem.docx
Data1. From the Table below prepare the following Financial Statem.docxData1. From the Table below prepare the following Financial Statem.docx
Data1. From the Table below prepare the following Financial Statem.docx
 
Measuring Impact and Return on Investment of Corporate Social Investment and ...
Measuring Impact and Return on Investment of Corporate Social Investment and ...Measuring Impact and Return on Investment of Corporate Social Investment and ...
Measuring Impact and Return on Investment of Corporate Social Investment and ...
 
Running head IMPROVEMENT OPPORTUNITY .docx
Running head IMPROVEMENT OPPORTUNITY                             .docxRunning head IMPROVEMENT OPPORTUNITY                             .docx
Running head IMPROVEMENT OPPORTUNITY .docx
 
Pfm Measure 2008
Pfm Measure 2008Pfm Measure 2008
Pfm Measure 2008
 
Pfm Measure 2008
Pfm Measure 2008Pfm Measure 2008
Pfm Measure 2008
 
Pfm Measure 2008
Pfm Measure 2008Pfm Measure 2008
Pfm Measure 2008
 
Building Common Outcome Framework
Building Common Outcome FrameworkBuilding Common Outcome Framework
Building Common Outcome Framework
 
Urban Institute & CWW- Nonprofit Performance Indicators
Urban Institute & CWW- Nonprofit Performance IndicatorsUrban Institute & CWW- Nonprofit Performance Indicators
Urban Institute & CWW- Nonprofit Performance Indicators
 
Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness
 
Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness Do evaluations improve organisational effectiveness
Do evaluations improve organisational effectiveness
 
Evaluation of health programs
Evaluation of health programsEvaluation of health programs
Evaluation of health programs
 
Definations for Learning 24 July 2022 [Autosaved].pptx
Definations for Learning 24 July 2022 [Autosaved].pptxDefinations for Learning 24 July 2022 [Autosaved].pptx
Definations for Learning 24 July 2022 [Autosaved].pptx
 
The field of program evaluation presents a diversity of images a.docx
The field of program evaluation presents a diversity of images a.docxThe field of program evaluation presents a diversity of images a.docx
The field of program evaluation presents a diversity of images a.docx
 
Importance of Monitoring and Evaluation to Decentralization
Importance of Monitoring and Evaluation to DecentralizationImportance of Monitoring and Evaluation to Decentralization
Importance of Monitoring and Evaluation to Decentralization
 

More from Emily Smith

Purdue OWL Annotated Bibliographies Essay Writi
Purdue OWL Annotated Bibliographies Essay WritiPurdue OWL Annotated Bibliographies Essay Writi
Purdue OWL Annotated Bibliographies Essay WritiEmily Smith
 
How To Sight A Quote In Mla Ma
How To Sight A Quote In Mla MaHow To Sight A Quote In Mla Ma
How To Sight A Quote In Mla MaEmily Smith
 
Top 3 Best Research Paper Writing Services Online - The Je
Top 3 Best Research Paper Writing Services Online - The JeTop 3 Best Research Paper Writing Services Online - The Je
Top 3 Best Research Paper Writing Services Online - The JeEmily Smith
 
011 How To Start Summary Essay Best Photos Of Res
011 How To Start Summary Essay Best Photos Of Res011 How To Start Summary Essay Best Photos Of Res
011 How To Start Summary Essay Best Photos Of ResEmily Smith
 
Some Snowman Fun Kindergarten Writing Paper
Some Snowman Fun Kindergarten Writing PaperSome Snowman Fun Kindergarten Writing Paper
Some Snowman Fun Kindergarten Writing PaperEmily Smith
 
Types Of Essay Writing With Examples Telegraph
Types Of Essay Writing With Examples TelegraphTypes Of Essay Writing With Examples Telegraph
Types Of Essay Writing With Examples TelegraphEmily Smith
 
Solar System And Planets Unit Plus FLIP Book (1St, 2
Solar System And Planets Unit Plus FLIP Book (1St, 2Solar System And Planets Unit Plus FLIP Book (1St, 2
Solar System And Planets Unit Plus FLIP Book (1St, 2Emily Smith
 
Essay Essaytips Essay University
Essay Essaytips Essay UniversityEssay Essaytips Essay University
Essay Essaytips Essay UniversityEmily Smith
 
Acknowledgement Samples 06 Acknowledgement Ack
Acknowledgement Samples 06 Acknowledgement AckAcknowledgement Samples 06 Acknowledgement Ack
Acknowledgement Samples 06 Acknowledgement AckEmily Smith
 
A Sample Position Paper - PHDessay.Com
A Sample Position Paper - PHDessay.ComA Sample Position Paper - PHDessay.Com
A Sample Position Paper - PHDessay.ComEmily Smith
 
PPT - About Our University Essay Writing Services PowerPoint
PPT - About Our University Essay Writing Services PowerPointPPT - About Our University Essay Writing Services PowerPoint
PPT - About Our University Essay Writing Services PowerPointEmily Smith
 
5 Qualities Of A Professional Essay Writer
5 Qualities Of A Professional Essay Writer5 Qualities Of A Professional Essay Writer
5 Qualities Of A Professional Essay WriterEmily Smith
 
College Essay Princeton Acce
College Essay Princeton AcceCollege Essay Princeton Acce
College Essay Princeton AcceEmily Smith
 
How To Write An Interesting Essay 2. Include Fascinating Details
How To Write An Interesting Essay 2. Include Fascinating DetailsHow To Write An Interesting Essay 2. Include Fascinating Details
How To Write An Interesting Essay 2. Include Fascinating DetailsEmily Smith
 
Letter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board GameLetter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board GameEmily Smith
 
Really Good College Essays Scholarship Essay Exampl
Really Good College Essays Scholarship Essay ExamplReally Good College Essays Scholarship Essay Exampl
Really Good College Essays Scholarship Essay ExamplEmily Smith
 
Poetry On UCF Diversity Initiatives Website By UCF
Poetry On UCF Diversity Initiatives Website By UCFPoetry On UCF Diversity Initiatives Website By UCF
Poetry On UCF Diversity Initiatives Website By UCFEmily Smith
 
Research Proposal On Childhood Obesity. Childho
Research Proposal On Childhood Obesity. ChildhoResearch Proposal On Childhood Obesity. Childho
Research Proposal On Childhood Obesity. ChildhoEmily Smith
 
270 Amazing Satirical Essay Topics To Deal With
270 Amazing Satirical Essay Topics To Deal With270 Amazing Satirical Essay Topics To Deal With
270 Amazing Satirical Essay Topics To Deal WithEmily Smith
 
How To Write A Good Philosophy Paper. How To
How To Write A Good Philosophy Paper. How ToHow To Write A Good Philosophy Paper. How To
How To Write A Good Philosophy Paper. How ToEmily Smith
 

More from Emily Smith (20)

Purdue OWL Annotated Bibliographies Essay Writi
Purdue OWL Annotated Bibliographies Essay WritiPurdue OWL Annotated Bibliographies Essay Writi
Purdue OWL Annotated Bibliographies Essay Writi
 
How To Sight A Quote In Mla Ma
How To Sight A Quote In Mla MaHow To Sight A Quote In Mla Ma
How To Sight A Quote In Mla Ma
 
Top 3 Best Research Paper Writing Services Online - The Je
Top 3 Best Research Paper Writing Services Online - The JeTop 3 Best Research Paper Writing Services Online - The Je
Top 3 Best Research Paper Writing Services Online - The Je
 
011 How To Start Summary Essay Best Photos Of Res
011 How To Start Summary Essay Best Photos Of Res011 How To Start Summary Essay Best Photos Of Res
011 How To Start Summary Essay Best Photos Of Res
 
Some Snowman Fun Kindergarten Writing Paper
Some Snowman Fun Kindergarten Writing PaperSome Snowman Fun Kindergarten Writing Paper
Some Snowman Fun Kindergarten Writing Paper
 
Types Of Essay Writing With Examples Telegraph
Types Of Essay Writing With Examples TelegraphTypes Of Essay Writing With Examples Telegraph
Types Of Essay Writing With Examples Telegraph
 
Solar System And Planets Unit Plus FLIP Book (1St, 2
Solar System And Planets Unit Plus FLIP Book (1St, 2Solar System And Planets Unit Plus FLIP Book (1St, 2
Solar System And Planets Unit Plus FLIP Book (1St, 2
 
Essay Essaytips Essay University
Essay Essaytips Essay UniversityEssay Essaytips Essay University
Essay Essaytips Essay University
 
Acknowledgement Samples 06 Acknowledgement Ack
Acknowledgement Samples 06 Acknowledgement AckAcknowledgement Samples 06 Acknowledgement Ack
Acknowledgement Samples 06 Acknowledgement Ack
 
A Sample Position Paper - PHDessay.Com
A Sample Position Paper - PHDessay.ComA Sample Position Paper - PHDessay.Com
A Sample Position Paper - PHDessay.Com
 
PPT - About Our University Essay Writing Services PowerPoint
PPT - About Our University Essay Writing Services PowerPointPPT - About Our University Essay Writing Services PowerPoint
PPT - About Our University Essay Writing Services PowerPoint
 
5 Qualities Of A Professional Essay Writer
5 Qualities Of A Professional Essay Writer5 Qualities Of A Professional Essay Writer
5 Qualities Of A Professional Essay Writer
 
College Essay Princeton Acce
College Essay Princeton AcceCollege Essay Princeton Acce
College Essay Princeton Acce
 
How To Write An Interesting Essay 2. Include Fascinating Details
How To Write An Interesting Essay 2. Include Fascinating DetailsHow To Write An Interesting Essay 2. Include Fascinating Details
How To Write An Interesting Essay 2. Include Fascinating Details
 
Letter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board GameLetter Paper Envelope Airmail Writing PNG, Clipart, Board Game
Letter Paper Envelope Airmail Writing PNG, Clipart, Board Game
 
Really Good College Essays Scholarship Essay Exampl
Really Good College Essays Scholarship Essay ExamplReally Good College Essays Scholarship Essay Exampl
Really Good College Essays Scholarship Essay Exampl
 
Poetry On UCF Diversity Initiatives Website By UCF
Poetry On UCF Diversity Initiatives Website By UCFPoetry On UCF Diversity Initiatives Website By UCF
Poetry On UCF Diversity Initiatives Website By UCF
 
Research Proposal On Childhood Obesity. Childho
Research Proposal On Childhood Obesity. ChildhoResearch Proposal On Childhood Obesity. Childho
Research Proposal On Childhood Obesity. Childho
 
270 Amazing Satirical Essay Topics To Deal With
270 Amazing Satirical Essay Topics To Deal With270 Amazing Satirical Essay Topics To Deal With
270 Amazing Satirical Essay Topics To Deal With
 
How To Write A Good Philosophy Paper. How To
How To Write A Good Philosophy Paper. How ToHow To Write A Good Philosophy Paper. How To
How To Write A Good Philosophy Paper. How To
 

Recently uploaded

Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxVishalSingh1417
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpinRaunakKeshri1
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...fonyou31
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 

Recently uploaded (20)

Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 

Assessing and improving partnership relationships and outcomes a proposed framework.pdf

  • 1. Assessing and improving partnership relationships and outcomes: a proposed framework Jennifer M. Brinkerhoff* Department of Public Administration, The George Washington University, Washington, DC 20052, USA Abstract To date, no evaluation frameworks are specifically targeted at evaluating partnership relationships, as opposed to partnership programmatic outcomes. Following a discussion and definition of partnership, its defining features, and value-added, the paper proposes a framework for assessing partnership relationships in order to: (1) improve partnership practice in progress, (2) refine and test hypotheses regarding partnership’s contribution to performance and outcomes, and (3) suggest lessons for future partnership work. The proposed assessment approach is continuous, process-oriented and participatory, and developmental. Targets of assessment include compliance with prerequisites and success factors, degree of partnership practice, the outcomes of the partnership relationships, partners’ performance, and efficiency. Indicators and associated methods are proposed for each of these. The framework addresses the evaluation challenges of integrating process and institutional arrangements into performance measurement systems, thus contributing to relationship performance as well as program outcomes. It can also be used to enhance the theory and practice of partnership. q 2002 Elsevier Science Ltd. All rights reserved. Keywords: Partnership; Framework; Organization 1. Introduction Throughout the public, private, and non-profit sectors, there is increasing experimentation with the use of partner- ships, alliances, and networks to design and deliver goods and services. Partnership, in particular, is touted as the answer to many public service challenges.1 However, it remains unclear whether or not partnership actually enhances performance, and if so, how? The increase in the rhetoric and practice of partnership is based on the assumption that partnership not only enhances out- comes—whether qualitatively or quantitatively, but it also results in synergistic rewards, where the outcomes of the partnership as a whole are greater than the sum of what individual partners contribute. Some research supports partnership’s contribution to improved performance.2 However, most evidence of inter-organizational partner- ships’ contributions to performance is anecdotal, except in some private sector alliances, where increased efficiencies can be quantified (Shah & Singh, 2001). In short, synergistic results are often sought and referenced, but they are rarely fully articulated and measured (Dobbs, 1999). Furthermore, the process or how to of creating such synergistic rewards are more hopeful than methodical or well understood. Under the new public management, evaluation most often concentrates on results or outcomes. While these are important in ensuring responsiveness, accountability, and quality, they do not tell us much in terms of how to improve public service delivery and enhance efficiency, especially when results are disappointing. Recent inno- vations in the private sector underscore the shortcomings of over-emphasizing or looking exclusively at outcomes, e.g. financial performance, and ignoring process dimensions.3 One danger is sacrificing long-term value creation for short- term performance (Kaplan & Norton, 2001). From a pragmatic perspective, focusing only on results is simply not an effective management approach. 
While outcomes may be ‘valid as infrequent indicators of the health of entire systems’, they are not useful for ‘making tactical decisions’ or interpreting performance within shorter time frames (Schonberger, 1996, p. 17). 0149-7189/02/$ - see front matter q 2002 Elsevier Science Ltd. All rights reserved. PII: S0149-7189(02)00017-4 Evaluation and Program Planning 25 (2002) 215–231 www.elsevier.com/locate/evalprogplan * Tel.: þ1-202-994-3598; fax: þ1-202-994-6792. E-mail address: jbrink@gwu.edu (J.M. Brinkerhoff). 1 Partnership is distinguished from other relationship types according to two defining dimensions, mutuality and organization identity, discussed later. 2 For example, Ellinger, Keller, and Ellinger (2000) studied the relationship among departments internal to organizations. They examined the relationship between interaction (meetings and information exchange) and collaboration (teamwork, sharing, and the achievement of collective goals), and performance. They found that while both are positively associated with performance, more specifically, collaboration mediates the relationship between interaction and performance. 3 For a brief review of these, see Kaplan and Norton (1992, 1996) and Schonberger (1996).
  • 2. Good evaluation practice suggests that, ideally, evalu- ation takes into account all key factors that may influence outcomes. This would encompass the institutions and incentives governing the implementation of policies and programs, including informal rules, regulations, controls, and structures (Squire, 1995). These dimensions are crucial components of cause-and-effect linkages that comprise strategy and ultimately lead to performance outcomes (Kaplan & Norton, 2001). Unfortunately, addressing process and institutional arrangements is among the most common difficulties associated with performance indicators and performance monitoring (Funnell, 2000). The fad of performance management frequently ignores these elements. There are several reasons for this; among them are two evaluation challenges, and one reality. First, processes and institutional arrangements are not only difficult to measure, they are sometimes difficult to identify and articulate. Varying degrees of formalization, organiz- ation culture, and the broader social culture, including personal relationships, may yield exceptions to planned procedure. Understanding the gap between implementation plans and actual operations can be difficult. In addition, indicators of such process and institutional features are not easily quantified. These systems are also dynamic, necessi- tating continuous review with periodic adjustments in targets of analysis and program theory assumptions. The second challenge is one of attribution. How can we know that this particular process or institutional arrangement causes this particular outcome? Or is even associated with it? Outcomes are separated causally and temporally from such inputs; attribution requires a sophisticated investi- gation of cause-and-effect relationships that may entail multiple intermediate stages. Even after such analysis, attribution may be problematic. One reality of why processes and institutional arrange- ments are relatively ignored in the current emphasis on performance measurement is simply that they are not immediately exciting. They are not immediate, in that it takes time for these arrangements to become institutional- ized and some element of them will always remain dynamic. They are not exciting because in the eyes of direct program beneficiaries, they are less important than the outcomes themselves. While the general public may persist in this prioritization, it behoves public managers to become more technical and scientific about the way they assess and improve public programs as a means to enhancing these outcomes that are so valued. Do we expect anything less from the private, commercial sector? Consumers may continue to vote with their dollars, but investors want to know that companies are internally efficient and effective, enabling them to sustainably produce valued goods and services, respond to changes in the marketplace, and pursue innovation. Similarly, public managers and policymakers must be accountable not only to the recipients of public goods and services, but also to tax payers, who want to know that those goods and services have come at an efficient price. The purpose of this paper is to propose a framework for assessing partnership work in progress, with an eye to improving partnership practice as a means to enhancing outcomes. Such a task will necessarily entail different layers of assessment, with slightly varying purposes. First, a developmental evaluation approach, i.e. 
one that seeks to improve work in progress, will be used to dialectically determine indicators, collect data, and assess partnership practice. This approach aims to ensure good partnership practice, consistent with our general knowledge of what partnership means, in order, second, to support a theory-based evaluation, which seeks to test the theory that partnership contributes to performance. Together, the two approaches help to maximize the effectiveness of the partnership in progress and, in the event the program is not successful, help preclude assumptions that ineffectiveness of the overall program is attributable to theory failure, as opposed to process failure (Birckmayer & Weiss, 2000). In this sense, we need to examine partnership both as a means and as an end in itself. The proposed assessment approach and its application seek to: (1) improve partnership practice in the context of program implementation; (2) refine and test hypotheses regarding partnership's contributions to performance and outcomes; and (3) suggest lessons for future partnership work in order to maximize its potential to enhance outcomes. The paper begins with a brief description of the nature and definition of partnership, followed by a review of existing conceptual frameworks that may be useful in assessing partnership work. The proposed framework is then presented, including the general approach, a discussion of what to measure, and a proposed methodology.
2. The nature of partnerships (Footnote 4: This section draws heavily from Brinkerhoff, 2002a.)
Partnership is promoted both as a solution for reaching efficiency and effectiveness objectives, and as the most appropriate relationship as defined by value-laden principles. Based on a review of the literature (Brinkerhoff, 2002b), the ideal type of partnership can be defined as follows: Partnership is a dynamic relationship among diverse actors, based on mutually agreed objectives, pursued through a shared understanding of the most rational division of labor based on the respective comparative advantages of each partner. Partnership encompasses mutual influence, with a careful balance between synergy and respective autonomy, which incorporates mutual respect, equal participation in decision-making, mutual accountability, and transparency. There are three obvious problems with these ideal-type
  • 3. definitions: (1) the extent to which they can be operational- ized is unclear; (2) they may not be universally appropriate; and (3) their justification is subjective and values-based. It is, therefore, more appropriate to examine partnership practice on a relative scale, according to more specific definitional dimensions. This allows us to examine empiri- cally the extent to which an inter-organizational relationship is operating like a partnership. Henceforth, the term partnership is used to describe this relative practice. Literature and experience combine to suggest that two dimensions are salient for defining partnership and dis- tinguishing from other relationship types. Mutuality encompasses the spirit of partnership principles; and organization identity captures the rationale for selecting particular partners, and its maintenance is the basis of partnership’s value-added. Mutuality can be distinguished as horizontal, as opposed to hierarchical, coordination and accountability, and equality in decision-making, as opposed to domination of one or more partners. Additional principles (from the ideal-type partnership) include jointly agreed purpose and values; and mutual trust and respect. Mutuality does not imply equal power relations. However, it does seek to highlight the indispensability of each partner (based on organization identity below), which can assist traditionally weaker partners to advocate for greater equality in decision- making. Mutuality refers to mutual dependence, and entails respective rights and responsibilities of each actor to the others (Kellner & Thackray, 1999). These rights and responsibilities seek to maximize benefits for each party, subject to limits posed by the expediency of meeting joint objectives. Organization identity generally refers to that which is distinctive and enduring in a particular organization. It is generally believed that the creation and maintenance of organization identity is essential to long-term success (Albert & Whetten, 1985; Gioia, Schultz, & Korely, 2000). The key is not necessarily to maintain organization systems, processes, and strategies over time, but to maintain the organization’s core values and constituencies. Organiz- ation identity can be examined at two levels. First, the maintenance of organization identity is the extent to which an organization remains consistent, committed, accounta- ble, and responsive to its mission, core values, and constituencies. Second, from a broader institutional view, organization identity also refers to the maintenance of characteristics—particularly comparative advantages— reflective of the sector or organizational type from which the organization originates. A primary driver for partnership is accessing key resources needed to reach objectives, but lacking or insufficient within one actor’s individual reserves. While each actor has their own unique portfolio of assets and skills, generalizations can be made with respect to the comparative advantages of particular types of actors. Both internal and external perceptions of organization identity are important. Internally, a strong sense of organization identity is an essential component of organization effectiveness, particularly with respect to staff commitment and motivation. Externally, two sources of perceptions are salient. First, organizational success is dependent upon the perceptions of the organization’s constituents. 
Performance definitions increasingly stress assessment from the constituents an organization seeks to benefit (Fowler, 1997). Second, the basis for partnership's value-added is accessing what external partners perceive to be unique contributions (see later). These defining dimensions help to distinguish partnership from other relationship types. Other relationships may emphasize only one dimension. For example, contracts typically seek to exploit organization identity, purchasing the unique advantages of a particular organization, but incorporate little mutuality, with the terms of the contract determined in advance by the purchasing organization. (Footnote 5: Contracts do not always violate the mutuality dimension; as a legal mechanism, a contract can be used to confirm mutually determined agreements in support of a partnership.) Another common relationship variation is extension, where mutuality may be high, but over time a significant blurring between the organizations develops, where one or more can be said to have lost their organization identity. Such relationships have been documented with respect to non-profits partnering with government (Lipsky & Smith, 1989–1990), donors (Hulme & Edwards, 1997), and the private sector (Murphy & Bendell, 1997). Finally, relationships characterized by low mutuality and low organization identity can be seen as co-optation or gradual absorption. These relationships may begin as partnerships but lose these dimensions over time. The exercise of power is inherent to inter-organizational relationships. Lister (2000) argues that power can be exercised to shape the needs of others, influencing them to pursue behavior in the interests of the power-holder. Such dynamics complicate the identification of partnership practice, confirming the need for broad and diverse participation in assessment processes. Partnership's defining dimensions form the basis for its value-added. Organization identity is the foundation for partnership. Partnerships with other actors are pursued precisely because these actors have something unique to offer, whether this is resources, skills, relationships, or consent. If organization identity is lost, by definition comparative advantages are lost, the organization loses legitimacy in the eyes of its defined constituencies, and its effectiveness wanes. Absorption, co-optation, bureaucratic creep, or, more broadly, the infiltration of one organizational culture into another can all lead to a diminished capacity of a partner to maximize its contribution in the longer run (Edwards, 1996).
  • 4. There is no longer a strong rationale to justify the extra effort required for partnership.
Footnote 6: Organizations may influence their partners' organization culture, whether consciously or not, and this influence may be mutual and even desirable. In developing a partnership identity, for example, partner organizations cultivate a shared understanding of the partnership and its vision, in addition to a partnership organization culture. However, to maintain partnership's value-added, partners must take care to maintain their unique identity and contribution over time, implying that their organization cultures will remain somewhat distinct, particularly in activities beyond the scope of the partnership.
Mutuality can reinforce as well as maximize the benefits of organization identity. The opportunity to participate and influence equally means that each actor can more easily protect its organization identity, and hence the efficiency, effectiveness, and synergistic rewards of the partnership. No one organization can understand the implications of its or the partnership's actions for members' organization identity. Mutuality at least affords partner organizations the opportunity to consider and explain these implications and potentially defend their distinctive advantages, skills, and legitimacy, all of which are necessary for the partnership's success. Mutuality also affords opportunities for partner organizations to contribute their skills and other advantages as needed. With mutuality, partners can more easily raise new ideas and propose new, more effective approaches. Mutuality enables partners to contribute to the partnership with fewer constraints (e.g. approvals, scrutiny, regulation and other forms of interference) and greater legitimacy. In addition, mutuality can help to ensure acceptance of the partnership's policy and procedures, and ease their implementation, when each actor has agreed to them and feels a sense of ownership. Partnerships, like any relationship, are dynamic. While accessing the unique contributions of other actors (i.e. their organization identity) is the primary driver for pursuing partnership, over time other motivators and de-motivators may develop. Especially when partnerships are initiated among partners without any previous history (or where the history may have been conflictive or competitive), the partnership may begin with highly specified roles and responsibilities, with an effort to minimize mutual dependence. Starting small, the relationship may then evolve into more complex interactions and inter-dependencies as partners develop mutual understanding and trust. In this sense, the initial drivers for partnership may expand to encompass emerging as well as newly recognized opportunities, which actors may now perceive as lower risk. Alternatively, the initial drivers may dissipate if partnership dimensions (organization identity and mutuality) are not maintained, or if contextual factors render them less relevant.
3. Existing conceptual frameworks for assessing partnership work
While the evaluation and performance management literature is replete with discussions of measuring outcomes and results, there is very little written about evaluating or assessing partnership relationships themselves. For example, Provan and Milward (2001) propose a framework for evaluating public sector networks at three levels: the community, the network, and the organization/participant. At the network level, they mainly suggest structural targets of analysis (e.g. number of partners, and multiplexity, or number of connections between organizations), or the outcomes of the network (e.g. the range of services provided). Their framework does little to address the quality of the relationship among organization members and how it can be improved to contribute more effectively to outcomes. Particular fields are struggling with evaluating partnership relationships, with some lessons offered.
In the health field, identified assessment criteria include: willingness to share ideas and resolve conflict, improved access to resources, shared responsibility for decisions and implementation, achievement of mutual and individual goals, shared accountability for outcomes, satisfaction with relationships between organizations, and cost effectiveness (Leonard, 1998, p. 5). In the education field, the Ford Foundation Urban Partnership Program provides a process example for assessing partnership relationships (Rendon, Gans, & Calleroz, 1998). While the initial evaluation framework was more directly performance based, an assessment component was later added to examine the history and development of the partnerships, and lessons for partnership design and implementation, among other things. This assessment pursued a process approach, where partner stakeholders determined and mutually agreed on their own indicators for partnership work. From a more general sectoral perspective, the nature of private goods and their bottom-line market prices and quantifiable cost structures enable the private sector to rely on straightforward quantitative data sources. For example, Shah and Singh (2001) outline a model for evaluating the performance of supply chains that relies on the time length of various stages in the supply chain, the cumulative cost addition for the raw material, and the proportionate cost addition for the various stages, culminating in a cost profile (a toy numerical illustration of such a profile appears at the end of this passage). While the division of labor in partnerships for public service delivery is rarely so straightforward and sequential, such frameworks do suggest identifying quantifiable indicators, where appropriate, to ensure that the added costs of the relationship do not outweigh its value-added. CIVICUS (the World Alliance for Citizen Participation) looks specifically at civil society and social capital, examining four facets: structure, values, space, and impact (CIVICUS, 2001). This framework is a reminder that partnership approaches are a valued end in themselves. In order to assess their contribution and efficiency, it is necessary to look not only at their structure and impact, but at their operating values as well.
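The cost-profile logic that Shah and Singh describe can be made concrete with a small, purely hypothetical calculation. The stage names, durations, and cost figures below are invented for illustration and are not taken from their model or data; the sketch simply shows how per-stage time and cost additions accumulate into a cost and time profile.

```python
# Illustrative sketch only: a toy cost/time profile of the kind Shah and Singh (2001)
# describe for supply chains. Stage names and figures are invented for illustration.

stages = [
    # (stage name, days in stage, cost added at this stage)
    ("raw material procurement", 10, 40.0),
    ("manufacturing",             7, 85.0),
    ("warehousing",               5, 10.0),
    ("distribution",              3, 15.0),
]

total_cost = sum(cost for _, _, cost in stages)
cumulative_time = 0
cumulative_cost = 0.0

print(f"{'stage':<28}{'days':>6}{'cum. days':>11}{'cum. cost':>11}{'% of cost':>11}")
for name, days, cost in stages:
    cumulative_time += days
    cumulative_cost += cost
    share = 100 * cost / total_cost        # proportionate cost addition for the stage
    print(f"{name:<28}{days:>6}{cumulative_time:>11}{cumulative_cost:>11.1f}{share:>10.1f}%")
```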
  • 5. A specific framework developed for assessing partner- ships (particularly in international development) comes from the US Agency for International Development’s (USAID) work on inter-sectoral partnerships and the New Partnership Initiative (Charles & McNulty, 1999). Drawing upon the work of other assessment and evaluation tools,7 the proposed framework identifies three dimensions for assess- ment: values and capacity, process, and impact. While some dimensions of the framework (organizational capacity and culture, and communication processes) are immediately relevant to the task at hand, the overall framework emphasizes impact and the external environment to an extent that is beyond the scope of the partnership relationship assessment proposed here. As with the Ford example noted earlier, Charles and McNulty (1999) recommend that member partners participate in determining and selecting precise indicators for each of the assessment categories. They further confirm that partnership indicators are more likely to be qualitative and subjective than quantitative and objective. The discussion-oriented self-assessment (DOSA) Tool (Levinger & Bloom, 1997) demonstrates that industry benchmarks can be established despite the incorporation of self-determined, contextual indicators. DOSA combines a framework of identified capacity targets with self-assessed baseline data, and goal setting. DOSA’s application to a number of US private voluntary organizations has yielded industry benchmark data over several years.8 While there are methodological limitations to this approach, it has arguably encouraged organizations to participate and engage in comparative performance analysis, potentially improving the performance of the industry more broadly. Each of these examples offers lessons in terms of assessment targets and processes. However, none of them is specifically designed and articulated to emphasize the assessment of the partnership relationship as a means to performance outcomes, with an eye to maximizing the effectiveness of partnership relationship practice. 4. The proposed assessment approach Evaluation theory and practice has evolved substantially. It now encompasses process and implementation evalu- ations, in addition to impact and end of project evaluations, and it honors the use of a range of research designs and methods whose choice is driven by evaluation purpose (Dym & Jacobs, 1998). This latter evolution is largely credited to Patton’s (1997) promotion of utilization-focused evaluation. Under this approach, evaluation is judged according to its utility and actual use. This necessarily means that utilization-focused research is highly personal and situational—it emerges through a dialogue with intended users as to their objectives, and the most mean- ingful (i.e. useful) indicators and means of collection and measurement (Drucker, 1990). The proposed assessment approach is continuous, process-oriented and participatory, and developmental, where the assessor assumes the role of a critical friend. The word ‘assessment’ is intentionally chosen over evaluation, as assessment suggests an investigative process that is more exploratory and developmental than confirma- tory (Rendon et al., 1998). The assessment is process- oriented both in the sense that it examines the processes by which partners interact and provide goods and services, i.e. 
focusing on actual operations and internal dynamics, and in the sense that the specifics of the framework design and implementation are themselves the result of process. Generally, within the parameters of what needs to be measured (to be discussed later), partners determine specific criteria and priorities themselves. Self-stated criteria are particularly important with respect to partners' goals and indicators of success (Poulin, Harris, & Jones, 2000). Self-determination is a common practice for evaluations aiming to improve performance (Huebner, 2000; Levinger & Bloom, 1997; Rendon et al., 1998).
Footnote 7: These include the Inter-American Foundation's (IAF) (1999) Grassroots Development Framework, and the Discussion-Oriented Organizational Self-Assessment (DOSA), developed by the Education Development Center and Pact with assistance from USAID (Levinger & Bloom, 1997).
Footnote 8: The comparative data is available at: http://www.edc.org/INT/CapDev/dosafile/findings.htm (accessed February 18, 2002). The identity of participating organizations is protected.
Footnote 9: The challenge of self-determined criteria is to sufficiently inform participants' selection with the evaluator's expertise, particularly regarding indicator categories and comparative approaches. The effectiveness of this approach depends upon the participants' respect for the evaluator's expertise and acceptance of the general framework. Such respect and agreement can divert potential conflict among participants.
A process approach serves additional functions as well; it brings conflict into the open, provides a common platform for agreement, and increases the legitimacy of proposed measures (NORAD, 1989). Developmental evaluation (Patton, 1997) refers to evaluation in the context of ongoing program or organizational development. Essentially, the evaluator acts as an organization development consultant, applying evaluative logic to performance assessment and improvements. The purpose of the evaluation is to support program/project, staff, and/or organizational development. According to Patton (1997), 'The evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, ongoing process of continuous improvement, adaptation, and intentional change' (p. 105). Clear, specific, and measurable goals are not the basis up-front, since these can be limiting, goals may vary among team members, and the direction is not necessarily known in advance. Most importantly, the goals of the intended users must form the basis for evaluative indicators and methods. The critical friend model (Rallis & Rossman, 2000) emphasizes this implicit learning function. The objective of the critical friend model is to open the dialogue and blur the
  • 6. borders between the act of evaluation and the program being evaluated (Greene, 1990; qtd. in Rallis & Rossman, 2000). Through dialogue, new knowledge is produced collectively. The role of the assessor is to generate data and encourage interpretations that foster learning. At the same time, the assessor must adopt a critical stance that is 'willing to question the status quo and demand data to guide ethical decisions about change' (Rallis & Rossman, 2000, p. 84). The relationship of the critical friends (the assessor and partnership actors) is intended to be equitable and reciprocal, where the traditional evaluation power relationship is deliberately blurred, and all recognize the unique contributions of others. With this developmental and dialectic approach in mind, the assessment process aims to improve partnership work in progress.
5. Assessment targets
Before discussing the general parameters of what should be assessed in a partnership relationship, an important caveat must be mentioned. Because many benefits of partnership work derive from the relationship itself, and because all relationships are dynamic, partnership assessment should be seen as an evolving process. While movement is not automatically uni-directional (i.e. always leading in a positive direction), the potential for partnership's added value tends to develop over time and experience. This means that partnership cannot be expected to yield immediate results, though this may occur. More likely, it is as partners become more familiar with each other's strengths, weaknesses, operations, and representatives that synergistic rewards will emerge. This process not only entails an increase in mutual understanding, but also trust building. Since partnerships are dynamic, they have the potential to yield different costs and benefits at different stages of their development. Furthermore, as they become more effective and institutionalized relationships, one should expect a gradual shift in emphasis within the partnership work, from being activity-driven to becoming more strategic, looking and planning for opportunities to yield synergistic rewards. This caveat suggests humility in the expectations of what partnership can deliver in the short-term, and the need for diligence in ensuring that partnership dynamism moves in a positive direction, toward greater understanding, trust, and consequent efficiencies. It also confirms the need to periodically revisit and possibly redesign assessment indicators and processes.
The assessment approach seeks to test the theory that partnership can produce an added value beyond other relationship types. Partnership evaluation to date has primarily focused on the causal chain in Fig. 1. The proposed framework emphasizes relationship outcomes, examining the causal chain in Fig. 2.
Fig. 1. Traditional causal chain.
Fig. 2. Causal chain for relationship outcomes.
These relationships are moderated by the partnership's continuous incorporation of success factors and its efficiency. Accordingly, five general areas of assessment are proposed: (1) compliance with prerequisites and success factors in partnership relationships, (2) the degree of partnership practice, (3) outcomes of the partnership relationship, (4) partners' performance, and (5) efficiency. These five categories are inextricably linked and some overlap. Targets of analysis for each area, along with proposed evaluative methods, are specified in Table 1. These targets constitute the general framework within which participants will negotiate and determine precise indicators most meaningful to them.
A summary of each target area appears in the boxes later.
Table 1. Summary of proposed assessment targets and methods
I. Presence of prerequisites and success factors (methods: partner interview; partner survey)
A. Pre-requisites and facilitative factors
† Perceptions of partners' tolerance for sharing power
† Partners' willingness to adapt to meet partnership's needs
† Perception of receptivity to new solutions to improve the partnership, its value, and day-to-day performance
† Speed and flexibility in addressing the need for corrective action
† Accommodation of special requests among the partners
† Responsiveness of partners to unforeseen situations
† Existence of partnership champions
† Existence of champions within each partner organization and within the partnership as a whole
† Focus of champion's advocacy (internal to a partner organization, within the partnership, externally)
B. Success factors from the literature
† Trust
  † Character-based: perceptions of integrity, honesty, moral character, reliability, confidentiality as appropriate, etc.
  † Competence-based: perceptions of competence in prescribed/assumed skill areas, business sense, common sense, judgment, knowledge, interpersonal skills, understanding of partnership, etc.
  † Confidence: standard operating procedures, contractual agreements and their degree of formality
† Senior management support
  † Direct participation
  † Provision of resources and support to organization members participating in the partnership
† Ability to meet performance expectations
  † External constraints
  † Partner capacity
† Clear goals
  † Consistent identification of partnership goals and mission
  † Regular partner meetings to review, revise, and assess progress in meeting identified goals
  † Shared common vision for the partnership
  † Mutually determined and agreed partnership goals
† Partner compatibility
  † Knowledge and understanding of partners' mission, operations, and constraints
  † Previous conflict or confrontations among partners
  † Compatible operating cultures (e.g. operating philosophies, management styles, teamwork)
  † Compatible constituencies
  † Compatible core values
  † Mechanisms to address incompatibilities
† Conflict
  † Degree
  † Frequency
  † Extent of conflict avoidance within partnership
  † Presence/absence of one or more dominating partners
II. Degree of partnership (methods: process observation and assessment; partner identification and assessment of indicators; partner interview; partner survey)
A. Mutuality
† Mutuality and equality
  † Equality in decision making
  † Democratic procedures
  † Satisfaction that all views are considered
  † Joint determination of program activities and procedures
  † Process for determining division of labor and risk/reward balance
† Resource exchange
  † Relative balance
  † Nature of resources exchanged
† Reciprocal accountability
  † Regular reporting among partners
  † Access to performance information
  † Financial controls balanced with administrative imposition
  † Joint design of evaluations/assessments
† Transparency
  † Established channels for continuous dialogue and information sharing
  † Timely response to information requests
  † Sharing of relevant information beyond specified agreements/requirements
† Partner representation and participation in partnership activities
  † Participation in planning and review meetings
  † Program activities
  † Partner satisfaction with opportunity to participate
  † Rules governing who can represent the partnership, within what limits
† Mutual respect
  † Consideration of partners and convenience in the planning of meetings and other organizational requirements
  † Recognition of indispensability of each partner, including unique strengths
  † Shared understanding of respective partner drivers
† Even benefits
  † Perception of fairness
  † Satisfaction with benefit distribution
  † Satisfaction with the criteria for benefit distribution
B. Organization identity
† Determining partner organization identities
  † Mission
  † Major strengths and weaknesses
  † Primary constituents
  † Underlying values
  † Organization culture
  † Methods for assessing mission attainment and maintenance of all of the above
† Organization identity within the partnership
  † Perception of threats or compromises of organization identity within the partnership
  † Nature of organization adaptations/adjustments in order to effectively promote and participate in the partnership
  † Perception of partners' adjustments in response to expressed concern about organization identity
  † Extent to which organization has changed as a result of partnership participation and quality of that change
  † Influence of partnership work on partner organizations' service quality and responsiveness to core constituencies
  † Influence on and use of core constituencies
  † Perceptions regarding the extent of mutual adaptation
  † Perceptions of overall impact of partnership work on organization identity
III. Outcomes of the partnership relationship (methods: partner interview; partner survey; process observation and assessment)
1. Value-added
† Qualitative synergistic outcomes of program
† Quantitative synergistic outcomes of program
† Linkages with other programs and actors
† Enhanced capacity and influence of individual partners
† Other multiplier effects
2. Partners meet own objectives
† Satisfaction with progress in meeting identified drivers
† Qualitative and quantitative evidence of meeting drivers
† Enhanced performance in pursuing own mission
† Enhanced performance in satisfying constituencies
3. Partnership identity
† Partnership organization culture
† Values
† Partnership mission, comparative advantages, value-added
† Name recognition (e.g. stakeholder feedback, publicity, logo, web page)
† Partnership constituencies
IV. Partner performance (methods: review of project proposal; process observation and assessment; partner interview; partner survey)
A. Partners and partner roles enacted as prescribed or adapted for strategic reasons
B. Partner assessment and satisfaction with their partners' performance
† Compliance with expected and agreed roles
† Satisfaction of partners with each other's performance
† Partner performance beyond the call of duty (i.e. extra-role behavior)
V. Efficiency and strategy (method: partner interview)
† Identification of critical factors influencing partnership's success
† Extent to which these are continuously monitored
† Extent to which these are strategically managed
5.1. Pre-requisites and success factors of partnership relationships
Pre-requisites to effective partnership relationships (Box 1) include partners' tolerance for sharing power, and willingness to adapt their operations and procedures to facilitate the partnership's performance (Brinkerhoff, 2002b). The presence of a partnership champion (and whether champions exist within each partner organization) is another facilitative factor. Champions are entrepreneurial individuals who advocate on behalf of the partnership and the partnership approach within their home organizations, within the partnership as a whole, and externally. Championing capacity not only entails communication, negotiation, and organizational skills, but also perceived legitimacy among partners and stakeholders.
Box 1. Pre-requisites and facilitative factors
† Tolerance for sharing power
† Willingness to adapt to meet partnership's needs
† Receptivity to new solutions
† Flexibility in taking corrective action
† Accommodation of special requests
† Responsiveness to unforeseen situations
† Existence of champions
† Location
† Focus of advocacy
Building from the literature, partnership effectiveness can be gauged by the extent to which the relationship complies with identified best practice. This is somewhat of a controversial assessment target, since many of these features lack empirical support. However, there is some emerging consensus on what at least some of these success factors may be. For example, Whipple and Frankel (2000) surveyed business leaders in the food, and health and personal care industries regarding their conceptions of alliance success factors. Of a list of 18 factors generated from an extensive literature review, they found general consensus around five (though the ordering of these varied). These five factors were: trust, senior management support, ability to meet performance expectations, clear goals, and partner compatibility. An additional factor, which merits examination, is conflict. Several of the success factors can be deconstructed to enhance their specificity and explanatory value (Box 2).
Box 2. Success factors
† Trust (character and competence)
† Confidence
† Senior management support
† Ability to meet expectations
† Clear goals
† Partner compatibility
† Conflict
Trust can be based on either the character or the competence of participating individuals and organizations (Gabarro, 1987), and can also be distinguished from confidence. Trust is voluntary, linked to shared values, and is distinct from and potentially incompatible with confidence (Tonkiss & Passey, 1999). Contrary to an ethical
  • 10. basis, confidence is based on rational expectations, typically grounded in institutional arrangements, such as contracts, regulations, and standard operating procedures (Luhmann, 1988). Specific partners may have a particular preference for confidence over trust-based mechanisms. Presumably, that preference may change with time and repeated interaction among partners, as they accumulate experience demonstrating partner dependability and trustworthiness (Ostrom, 1990). In fact, Handy (1988) measures the level of trust in a partnership by the inverse variable of level of control, as indicated, for example, by reporting and approval requirements (qtd. in Malena, 1995, 12). Senior management support contributes to partnership performance both directly and indirectly. Directly, such support translates into resource commitments (e.g. financial, personnel, etc.) and often entails flexibility and consequent timesavings in terms of making adaptations to standard procedures to accommodate partner preferences and con- straints, or to maximize partnership performance. Indirectly, the participation and support of senior management symbolizes the organization’s commitment to the partner- ship and its success, contributing to trust building among partner organizations. The ability of a partnership to meet performance expectations can be examined at two levels. Individual partner performance is discussed later. Ability to meet performance expectations also refers to the existence of constraints beyond the control of the partnership, which inhibit its performance. These might include, for example, legal or regulatory policies imposed by a funder or government agency. Another constraint that should be assessed and monitored is whether or not the partnership or member organizations possess the necessary skills and capacity. For example, sometimes partners are selected for their relationship and legitimacy vis-à-vis key stakeholders, but may lack organization capacity. This does not prohibit partnership success, but it does identify areas for capacity investment, and can significantly increase the complexity of partnership implementation. Clear goals are an important target of assessment both in terms of outcome and process. With respect to the former, it is important that all partners understand the partnership’s goals (an indicator of partnership identity, discussed later) and share a common vision for the partnership, and that goals be clear so as to facilitate assessment. From a process perspective it is important that the mission, vision, and goals be mutually determined and agreed; this enhances the likelihood of goal attainment and the partners’ commitment (Leonard, 1998). Partner compatibility also encompasses a range of factors. The more partners know and understand of each other’s mission, track record, operations, and constraints in advance of the partnership, the less learning and trust building has to occur in the context of implementation. The evolution of this understanding is a key target of analysis. The speed of understanding and trust building is mediated by the partners’ previous experience. If the partners have experienced conflict or confrontations in the past, it will likely take much longer to become compatible partners. Most importantly, partner compatibility implies that partners do not fundamentally inhibit the organization identity of themselves or their partners. For example, do partners share core constituencies or at least not serve conflicting ones? 
Are core values among the partners contradictory? And if there are contradictions, can the relationship be justified for a greater good that serves the organizations' missions? Are mechanisms in place to guard against compromising identity due to these incompatibilities? Finally, conflict is an obvious target for assessment. However, it is not as straightforward as might be presumed. The absence of conflict may imply that mutual influence is compromised or non-existent (see, for example, Brown & Ashman, 1996). Lister (2000) draws upon Lukes' (1974) notion of power as 'socially structured and culturally patterned behavior' (22) to demonstrate how power can be exercised to shape the needs of others, influencing them to pursue behavior in the interests of the power-holder. Thus, consensus may imply a deeply ingrained power play. Assessing the manifestation of such power plays is not only highly subjective, it would be near impossible to determine and would likely generate conflicting interpretations. Still, it is a caveat worth noting in reviewing the extent to which partners are maintaining their own identity within the partnership. This is largely determined by the existence of one or more dominating partners.
5.2. Measuring the degree of partnership
Partnership practice should be assessed on a relative scale, because: desired goals and relationship preferences of partners will vary; the ideal-type partnership may be impossible to fully implement; and judgments of compliance with this model are extremely subjective. The degree of partnership can be assessed according to the presence of its defining dimensions: mutuality and organization identity. These dimensions are also contextually determined; specific and meaningful indicators are best left to the partners to determine. However, it is possible to recommend some sample indicators as suggested by the literature and practice to date.
5.2.1. Mutuality
Some of the most common indicators for mutuality (see Box 3) include equality in decision making, resource exchange, reciprocal (as opposed to hierarchical) accountability, transparency, and degree of partner representation and participation in partnership activities. Equality in decision-making is a challenge from the start, particularly if there is a power imbalance among partners. Power imbalances generally originate from one partner controlling the majority of the resources. When this is the case, true
  • 11. equality in decision-making can be skewed, whether because the more powerful partner takes charge, or, more subtly, because the less powerful partners defer to that partner’s wishes so as not to jeopardize future resource flows. This underscores the importance of resource exchange. It is important to recognize that not all resources are material. In other words, contributions can entail the hard resources of money and materials, as well as important soft resources, such as managerial and technical skills, information, contacts, and credibility/legitimacy. Mutuality implies mutual dependence among partners due to the unique and indispensable contributions each of them makes. With reciprocal accountability each partner takes responsibility and is accountable to the others for its actions and their potential impact on the partnership (Commins, 1997). Reciprocal accountability means that partners have access to performance information of the overall partnership and its individual partners on a regular basis and/or upon request. Consequently, accountability is closely related to transparency. Partners do not need to know everything about each other, but in partnerships they should be open and honest about areas of common concern or any information that can potentially influence partnership effectiveness and efficiency. Transparency is most commonly operationalized as formal information exchange requirements and response to specific information requests. Transparency can also be less formal and/or structured, such as impromptu telephone calls, e-mails, and conversations. Providing accurate and timely information is both a professional duty and an expression of respect (Peterson, 1997). This includes making relevant information available in an accessible manner, in the appropriate language, and with minimal use of terminology specific to a particular professional culture that excludes or is inconvenient to one or more partners. Partnership should entail full participation of all member partners, according to their comparative advantages and agreed roles. This includes decision making, as above, as well as participation in meetings, relevant discussions, and program activities. Mutual respect is also a key component of mutuality in partnership. Mutual respect rests on an explicit recognition of the indispensability of each partner and its contribution. Partners are aware of each of their partner’s unique strengths and seek to effectively incorpor- ate these into the partnership work. Mutual respect presumes that all negotiation and agreements are made in good faith, implying full disclosure of actor-specific objectives. Mutual respect is manifested in the extent to which each partner considers the implications of its actions for the other partners. This includes the scheduling of meetings, reporting requirements, and sensitivity to key relationships and potential conflicts. Finally, mutuality encompasses mutual benefit and risk sharing: all partners share the risks and the glory of their partnership work. This does not necessarily mean that partners benefit equally. Absolute equality in this sense would be extremely difficult to attain. Executives and managers in the private sector acknowledge that success deriving from alliances is based on a ‘relatively even, but not equal exchange of benefits and resources’ (Whipple & Frankel, 2000, p. 21). Partners will need to determine for themselves if they are satisfied with the relative evenness of the benefits and costs of the partnership work. 
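One possible way to operationalize the mutuality indicators just discussed (and summarized in Box 3 below) is to tabulate partners' ordinal survey ratings and flag indicators on which partner perceptions diverge. The sketch below is illustrative only: the partner names, scores, and the one-point divergence threshold are assumptions, and the paper itself leaves the choice of indicators, scales, and judgments to the partners.

```python
# Purely illustrative: averaging hypothetical 1-5 survey ratings on mutuality
# indicators (echoing Box 3) across partner organizations, and flagging
# indicators where partners' perceptions diverge. Not part of the framework itself.
from statistics import mean, stdev

# ratings[indicator][partner] = ordinal score, 1 (low) to 5 (high); invented data
ratings = {
    "equality in decision making": {"NGO": 4, "ministry": 2, "firm": 3},
    "transparency":                {"NGO": 5, "ministry": 4, "firm": 4},
    "reciprocal accountability":   {"NGO": 3, "ministry": 3, "firm": 2},
    "even benefits":               {"NGO": 2, "ministry": 4, "firm": 4},
}

for indicator, scores in ratings.items():
    values = list(scores.values())
    avg, spread = mean(values), stdev(values)
    flag = "  <- divergent perceptions, discuss" if spread > 1.0 else ""
    print(f"{indicator:<30} mean={avg:.1f} spread={spread:.1f}{flag}")
```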
Box 3. Degree of partnership: mutuality
† Mutuality and equality (self-determined)
† Equality in decision making
† Resource exchange
† Reciprocal accountability
† Transparency
† Partner representation & participation
† Mutual respect
† Even benefits
5.2.2. Organization identity
Partnership work inevitably entails adaptation. However, value-added is contingent upon each organization balancing these adjustments with the maintenance of their organization identity. In order to assess the preservation of members' organization identity within the partnership, it is first necessary to determine precisely what that identity is. This may entail a process of self-awareness promotion, where partners are asked to identify their mission, core constituencies, underlying values, and organizational culture. In other instances, partners may already be self-aware, implying a stronger identity from the start. Key areas of assessment with respect to organization identity maintenance include the degree of reciprocal adaptation for the purpose of protecting organization identities while maximizing their benefit to the partnership, maintenance of service quality and responsiveness to partners' constituencies, and maintenance of quality and focus on partners' comparative advantages (Box 4).
Box 4. Degree of partnership: organization identity
† Determining partner organization identities
† Perception of threats or compromises
† Nature of organization adaptations/adjustments
† Perception of partners' adjustments in response to expressed concern
† Extent and quality of organizational change
† Influence on partners' service quality and responsiveness to core constituencies
† Influence on and use of core constituencies
† Perceptions of mutual adaptation
† Perceptions of overall impact on identity
5.3. Outcomes of the partnership relationship
Relationship outcomes relate to the partnership's value-added. Value-added seeks to confirm and articulate that the partnership as a whole yields more than what would have resulted from the partner organizations operating independently. Because each partnership is unique in its composition and programmatic goals, it is impossible to identify specific cross-cutting value-added indicators. Furthermore, as partnerships are dynamic and many are experimental, it is also difficult to specify value-added indicators a priori. However, evidence of value-added, whether aspired to or identified after the fact, can be categorized as follows (Box 5).
Box 5. Outcomes of the partnership relationship
1. Value-added
† Qualitative and quantitative synergistic program outcomes
† Linkages with other programs and actors
† Enhanced capacity & influence
† Other multiplier effects
2. Partners meet own objectives
† Satisfaction in meeting identified drivers
† Evidence of meeting drivers
† Enhanced performance in pursuing own mission
† Enhanced performance in satisfying constituencies
3. Partnership identity
† Partnership organization culture
† Values
† Partnership mission & value-added
† Name recognition
† Partnership constituencies
Value-added may include qualitative or quantitative synergistic outcomes of the program itself (i.e. aspects of program performance that relate to advantages beyond what the actors could have independently produced), linkages with other programs and actors, enhanced capacity and
  • 12. influence of individual partners, and other multiplier effects such as program extensions and replication, new programs, etc. These may be expected, hoped for, or unforeseen outcomes of partnership work. For example, Brown and Ashman (1996) identify the following potential multipliers of inter-sectoral problem solving at the grassroots level: the creation and strengthening of local organizations, expanded activities and credibility of bridging NGOs, and the establishment of norms of reciprocity, cooperation, and trust among previously unrelated or antagonistic parties (1477). Value-added is difficult to confirm since attribution is problematic. Some indicators are more easily verified than others. Hence, evaluating partnership value-added is primarily (though not exclusively) perception- and consensus-based, and is often closely related to partner satisfaction. Another element of the effectiveness and outcomes of the partnership is the extent to which individual partners meet their own objectives through the partnership. Since partner- ship requires extra effort and is based on the will of the partners to engage as partners, individual partner drivers should be identified and their satisfaction assessed. This is an indication that the partnership, or at least a particular partner’s participation, will be sustainable. Partner drivers will presumably include enhancing the organization’s performance in pursuing its own mission and satisfying its constituencies. The underlying theory is that if partnership is appropriately established and effectively managed, it should improve performance for all partners (Lambert, Emmelhainz, & Gardner, 1996, p. 11). Finally, a successful partnership relationship is one that has developed its own partnership identity. This identity is the glue that holds the partners together and forms the basis for legitimacy and values identification of its major stakeholders. Partnership identity entails an identifiable organization culture, complete with processes and mechan- isms reflective of the partnership’s underlying values; a unique, and identifiable mission, with associated compara- tive advantages and value-added; and a set of constituencies that may go beyond the constituencies of individual partner organizations. 5.4. Partner performance Targets for partner performance are summarized in Box 6. Some aspects of partner performance can be assessed objectively, by comparing whether or not the partnership encompasses the partners and prescribed roles that were anticipated, or if not, whether or not changes were made in the service of overall objectives as a form of strategic adaptation. In addition to noting whether or not partners performed the roles prescribed (or subsequently agreed), it is important to assess whether or not they did so effectively and efficiently. Portions of such an assessment are contextual. In assessing relationships, the most important indicator of partner performance is the other partners’ satisfaction with that performance. In business alliances, successful companies are twice as likely to assess their partners’ performance than less successful companies (Harbison & Pekar, 1998). Partner performance entails an independent assessment of partner contributions in accord- ance with program design and partner agreements, as well as a mutual assessment among the partners of each partner’s performance. 
Discrepancies among these assessments are an important end in themselves, pointing to a need for better information sharing and trust building. Partner performance assessments should also note whether a partner acted above and beyond the call of duty in promoting and performing within the partnership.
5.5. Efficiency
Assessing the efficiency of the partnership relationship
  • 13. implies indicators for monitoring, maintaining, and improv- ing the partnership and its contribution to effectiveness and impact. These relate to the broader subject of strategic management. All organizations and programs face varying degrees of environmental hostility, both internally and externally. This aspect of partnership assessment addresses the extent to which there is environmental hostility toward the partnership program and approach, and the extent to which this hostility is proactively managed (Box 7). That is, is the partnership strategically managed such that all opportunities to reduce operating costs, such as those derived from environmental hostility, are pursued? Relevant components of environmental hostility will vary from partnership to partnership. In some instances it may be inefficient to invest resources in influencing particular environmental factors. For this reason, partner- ship leaders should first determine which of these hostile factors are critical, and, of these, which of them can be appreciated, influenced, or controlled within a reasonable cost. In some instances managing these critical factors may entail no or only marginal additional costs. For example, leaders should ensure that incentives and drivers for partnership champions and partnership organizations are always clearly identified and emphasize how the partnership responds to these. Such efforts can be pursued, for example, in the context of day-to-day management and communi- cations. Sample factors determining environmental hostility include: presence or potential of partnership champions; existence, effectiveness, and efficiency of institutional linkages among partners; capacity, commitment, strong organization identity, and compatibility of partner organizations; extent to which there is a ready demand for partnership products and services; homogeneity and degree of organization among partner- ship stakeholders and constituents; degree to which legal frameworks are facilitative or inhibiting; and stability of the partnership’s internal and external environments (Brinkerhoff, 2002b). 6. Proposed assessment methodology The proposed assessment methodology conforms to the critical friend and developmental models described above. It is also multi-faceted. There are three primary methods proposed. A summary of the application of each to the assessment targets appears in Table 1. Sequencing of these methodologies is important to their iterative contributions. The partner survey will encourage participants to begin to reflect on the issues. The survey and an initial baseline assessment and process observation will inform the partner interviews to follow. Subsequent applications of the three methodologies will build upon the results of each. Process observation will be continuous, with scheduled reporting and joint analysis determined by the participants. Based on these results, participants can negotiate the need for subsequent interviews and surveys. An application and analysis of all three methods would be expected at key moments in the life of the program, such as the mid-point and closing, or around critical events. 6.1. Process observation and assessment This method will include a review of project documen- tation and reports, observation of management meetings and program activities, and analysis of all data, including those collected by the methods later. 
The assessment aspect of this method will include an initial summary and analysis, to be supplemented by interactive feedback and interpretation sessions with partnership actors collectively. Such sessions will more directly address the developmental aspects of the assessment, in the service of improvements and learning.

Box 6 Partner performance
† Partner roles enacted as prescribed or adapted for strategic reasons
† Compliance with expected & agreed roles
† Satisfaction with partners' performance
† Partner performance beyond the call of duty (i.e. extra-role behavior)

Box 7 Efficiency
† Identification of critical factors influencing partnership's success (self-determined)
† Extent to which these are continuously monitored
† Extent to which these are strategically managed

6.2. Partner survey

A survey will be administered to partner organizations and staff. The survey will consist primarily of closed-ended questions, many of which will use quantitative ordinal scales. The partner survey can later be adapted, as needed and appropriate, and re-administered periodically throughout the lifetime of the partnership or for the determined length of the assessment process. Subsequent surveys will be informed by the partner interviews, agreed indicators, and the results of process observation and assessment. In particular, follow-up questions for baseline data and subsequent monitoring will be developed and incorporated into later surveys.
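Because the survey relies on closed-ended items scored on ordinal scales, and because discrepancies among partners' assessments are themselves informative (as noted above), the following sketch illustrates one way responses might be summarized and divergence flagged for joint discussion. It is illustrative only: the 1-5 scale, the divergence threshold, and the item names (adapted from Box 6) are assumptions, not part of the framework.

```python
# Illustrative sketch only: summarizing closed-ended ordinal survey items and
# flagging items on which partners diverge. Scale, threshold, and item names
# are assumptions for the example.
from statistics import median

# partner -> {item: rating on an assumed 1-5 ordinal scale}
responses = {
    "Partner A": {"roles enacted as agreed": 4, "satisfaction with partners' performance": 2},
    "Partner B": {"roles enacted as agreed": 5, "satisfaction with partners' performance": 5},
    "Partner C": {"roles enacted as agreed": 4, "satisfaction with partners' performance": 3},
}

def summarize(responses: dict, divergence_threshold: int = 2):
    """Return per-item medians and the items whose ratings spread beyond the threshold."""
    items = {item for ratings in responses.values() for item in ratings}
    summary, discrepancies = {}, []
    for item in sorted(items):
        ratings = [r[item] for r in responses.values() if item in r]
        summary[item] = median(ratings)
        if max(ratings) - min(ratings) >= divergence_threshold:
            discrepancies.append(item)  # worth raising in a joint feedback session
    return summary, discrepancies

medians, flagged = summarize(responses)
print(medians)  # per-item medians across partners
print(flagged)  # items where partners disagree markedly
```

The flagged items would not be treated as "errors" but as agenda points for the interactive feedback sessions described above.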
6.3. Partner interviews

Representatives of each partner organization will be interviewed to determine baseline information and potential indicators for definitions of mutuality and equality, partner organization identity and its maintenance within the partnership, partnership value-added, partner objectives, partnership identity, partner performance, efficiency, and strategic management of the partnership. The interviews will be semi-structured, with a combination of closed- and open-ended questions. Initial interviews will be informed by the results of the first partner survey. Appropriate representatives will be determined by the partner organizations themselves and may entail one or more individuals, as appropriate.

Feedback on potential indicators will form the basis for applying an adaptation of the Delphi technique in order to specify indicators agreeable to all. The Delphi technique is a "group process technique for eliciting, collating, and generally directing informed (expert) judgment towards a consensus on a particular topic" (Delp, Thesen, Motiwalla, & Seshadri, 1977, p. 168). It typically consists of anonymous input on a range of issues for which consensus is sought. Several rounds of input and feedback, typically through mail, are conducted, with data collected, collated, and analyzed to inform subsequent rounds until consensus emerges (or disagreement is highlighted). The proposed adaptation will entail initial data collection through partner interviews, with subsequent e-mail ranking and feedback, potentially extending to a broader group than those initially interviewed. Final presentation and agreement on suggested indicators will occur during a face-to-face, interactive meeting.
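To make the ranking-and-feedback rounds concrete, the sketch below shows one possible way to collate a round of rankings and compute a consensus signal for reporting back to participants. The choice of Kendall's coefficient of concordance (W), the indicator names, and the rankings are illustrative assumptions only; the framework itself does not prescribe a particular statistic.

```python
# Illustrative sketch only: collating one Delphi ranking round and computing
# Kendall's coefficient of concordance (W) as a rough consensus signal.
# The statistic, indicators, and rankings below are assumptions for the example.

def kendalls_w(rankings: list[list[int]]) -> float:
    """rankings: one list of ranks (1..n, no ties) per participant, all over the same n items."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))  # 1.0 = complete agreement, 0.0 = none

# Hypothetical round: three partner representatives rank four candidate indicators.
indicators = ["mutual decision-making", "identity maintenance", "value-added", "meeting own objectives"]
round_one = [
    [1, 2, 3, 4],  # participant 1's ranking of the indicators above
    [1, 3, 2, 4],
    [2, 1, 3, 4],
]

print(f"Kendall's W = {kendalls_w(round_one):.2f}")  # fed back to participants before the next round
# A W close to 1 suggests consensus is emerging; a low W highlights disagreement
# to be worked through at the face-to-face meeting.
```

In practice the anonymized rank sums and the consensus signal would simply be circulated with each e-mail round, consistent with the iterative, participatory character of the proposed methodology.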
7. Review and next steps

The proposed partnership relationship framework addresses the evaluation challenges of integrating process and institutional arrangements into performance measurement systems, thus contributing to relationship performance as well as program outcomes. It also potentially enhances the theory and practice of partnership. The developmental model and critical friend approach address the challenge of identifying, articulating, and measuring processes and institutional arrangements by: (1) maintaining a continuous assessment presence, whether directly by the assessor or indirectly by partnership members who have been sensitized to measurement, assessment, learning, and associated agreed targets of analysis; and (2) using dialogue to ensure shared understanding and to create new knowledge. Within this framework, relevant indicators, both qualitative and quantitative, can be jointly developed and measured through both intensive, open-ended interviews and standardized questionnaires. Feedback and assessment sessions allow for periodic adjustments in targets of analysis and in the particulars of the program theory regarding partnership's contributions to performance. These sessions enable adjustments to processes and behaviors that can improve the relationship and project performance. The mutual understanding and trust building that can emerge from such an assessment process can also lead to the identification of additional objectives and opportunities within the partnership. Alternatively, such a process may provide a mechanism for actors to readily gauge their satisfaction (or dissatisfaction) with the relationship and the prospects for improved relationships and practice, potentially leading to a decision to terminate the relationship. That is, as the partnership as a whole is assessed, each partner will also have a framework in which to conduct its own cost-benefit analysis of the partnership work, whether formally or informally.

The framework also begins to address the problem of attribution by maximizing the partnership's compliance with the definitional dimensions of partnership, with specific characteristics of the relationship determined by the actors themselves. This enables the assessor to more accurately determine whether any program failures or inefficiencies are due to the program implementation itself or to inadequacies in the partnership relationship. Attribution cannot be definitively determined. However, such a process can reduce some of the noise inherent to attribution challenges, and can be used to further refine our theory regarding partnership's contribution to performance. As knowledge emerges about how relationship attributes enhance performance, these attributes can be more specifically targeted and strengthened.

By combining standardized assessment targets with self-determined indicators and interpretation, the proposed framework is well positioned to follow the DOSA model. Comparative data from a number of partnership experiences could conceivably contribute to a benchmarking effort, which could inform partnership actors beyond those participating, and potentially contribute to a wider application of the proposed assessment framework, more attention to relationship outcomes, and improved partnership performance. Partnership practitioners would benefit from exploring how other partnerships have specified and measured partnership performance indicators. Benchmarking could also assist actors to manage expectations as partnerships evolve. Comparative data analysis might serve to identify cycles of performance, common challenges, and best practices.

The reality remains that without some enlightenment of program and organizational leaders and managers, funders, constituents, and perhaps the general public, process evaluation and the assessment of institutional arrangements are not likely to be mainstreamed as an essential component of performance management and evaluation. However, efforts such as the one described in this paper are likely to move this agenda forward in terms of ideas, conceptual frameworks, and the how-tos of such assessments, and hopefully to promote an accumulation of experience with them. In particular, it is hoped that this framework and its subsequent application will shed light on our understanding of partnership and its effectiveness as an institutional arrangement for getting results. The partnership rhetoric is strong; the practice has been relatively weak.
Frameworks such as the one prescribed here can promote: (1) a more refined understanding of partnership in general and how it differs from other institutional arrangements; and (2) a more practical determination of what partnership can mean in the context of particular programs and relationships, i.e. as determined and jointly agreed by members. Furthermore, such frameworks can contribute substantially to the identification and measurement of partnership attributes, such as mutuality and identity, and of partnership value-added. The private sector provides a salient lesson on this point. The importance of intangible assets has increased substantially (accounting for more than 80% of companies' book value by the end of the 20th century). While managers recognized this evolution, until more sophisticated performance measurement frameworks were developed, "They could not manage what they could not describe or measure" (Kaplan & Norton, 2001, p. 88). Similarly, the inability to articulate the features of partnership and its contribution to performance has heretofore discouraged its effectiveness and the investments necessary to attain its value-added. The application of the proposed partnership relationship assessment framework will assist future efforts to design and implement effective partnership relationships, as well as promote partnership practice and the maximization of its contribution to outcomes.

8. Lessons learned

This framework was originally developed for application to a federally funded consortium of non-profits and private consulting firms. The consortium management committee rejected the framework as proposed, claiming, among other things, that it was insufficiently specific. The framework's introduction and subsequent developments yield three important lessons for evaluating partnership relationships.

First, people continue to be uncomfortable with addressing issues of trust and other relationship dynamics. Participants emphasized the need to focus on indicators of program performance. For the most part, such reactions can represent discomfort with the new or different, pointing to a need for the assessor to clearly explain the framework in a face-to-face setting, which in this case was not possible. Another complaint was that participants would be uncomfortable addressing direct questions about their perceptions and feelings vis-à-vis partners' trustworthiness and competence. Such discomfort may be unavoidable. However, two possible responses are to: (1) introduce these issues in non-threatening, positively framed language, and (2) stress the assumption that the assessor and participants are trustworthy, interested in partnership performance, and will avoid blaming behavior.

Second, the resistance to and ultimate rejection of the framework confirm the need for champions for such efforts. In this instance, the original champion of relationship assessment convinced consortium members, as well as the contracting federal agency, of the merit of such an approach, both for improving the performance of this program and for identifying lessons for subsequent contracted programs. The assessment was thus written into the approved program proposal and resulting contract. Subsequently, the assessment was left without a champion and without a shared understanding of the merit of the exercise.
The original champion's home organization withdrew from the program early on due to changes in the policy environment, and turnover among the federal oversight staff eliminated the institutional memory supportive of the initial decision. Consequently, when discomfort emerged, the remaining participants were quick to reject the proposal. A replacement assessor, with minimal evaluation experience, was hired part-time. This hiring demonstrates a substantially reduced commitment to evaluation in general and implies reluctance to follow through with the original and agreed plan. Furthermore, in an exit interview, I was told there was already dissatisfaction with the efforts of the new evaluator (after only one month). It became obvious that the participants were unclear about what they wanted and what was required of them. Since the contracting agency had not pressed them on the issue, they seemed to be searching for ways to meet the minimum requirements of the agreed program proposal. At this point, a champion could assist the consortium management, as well as the contracting agency representative, to at least agree upon the objectives and a potential revision of the assessment requirement.

During the exit interview, a third lesson emerged. My lack of technical sectoral expertise (e.g. health, environment, agriculture) and its absence from the framework were repeatedly emphasized. I had raised this from the beginning, during the hiring process; it did not seem to be of concern at the time, and was not a part of my scope of work. The dissatisfaction with my lack of expertise could be interpreted as a ploy to reject the framework and end the initiative. However, sectoral expertise can be useful. For example, in process observation, without substantive program expertise it could be difficult to determine whether ideas are rejected for lack of technical merit or whether rejection reflects a power dynamic among consortium members. Technical expertise can be accessed through the critical friend approach, in which the assessor works closely with participants in interpreting data. The assessor should alert participants to the need to work with each other and with the assessor to ensure proper interpretation of events and indicators.

These challenges, lessons, and recommendations have been confronted in similarly complex evaluation efforts, such as comprehensive community initiatives. In particular, Brown (1995) outlines similar political and resistance dynamics and confirms evaluators' increasing need for three skills: pedagogical, political, and trust building. Programs increasingly operate at multiple levels with diverse stakeholders, whose relations cannot be divorced from program performance. In order to capture these dynamics, Connell and Kubisch (1999) propose a theory of change evaluation approach which, when combined with the aforementioned skills, may assist evaluators to make progress in designing and applying learning-based evaluation approaches that would provide a better understanding of the role of multi-party relationships in improving program performance.
References

Albert, S., & Whetten, D. A. (1985). Organization identity. In L. L. Cummings, & B. M. Staw (Eds.), Research in organizational behavior (Vol. 7, pp. 263-295). Greenwich, CT: JAI Press.
Birckmayer, J. D., & Weiss, C. H. (2000). Theory-based evaluation in practice: What do we learn? Evaluation Review, 24(4), 407-431.
Brinkerhoff, J. M. (2002a). Government-NGO partnership: A defining framework. Public Administration and Development, 22(1).
Brinkerhoff, J. M. (2002b). Partnerships for international development: Rhetoric or reality. Boulder, CO: Lynne Rienner Publishers.
Brown, P. (1995). The role of the evaluator in comprehensive community initiatives. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives. Volume 1: Concepts, methods, and contexts. Washington, DC: The Aspen Institute. http://www.aspenroundtable.org/vol1/index.htm. Cited February 17, 2002.
Brown, D. L., & Ashman, D. (1996). Participation, social capital, and intersectoral problem-solving: African and Asian cases. World Development, 24(9), 1467-1479.
Charles, C., & McNulty, S. (1999). Partnering for results: Assessing the impact of inter-sectoral partnering. Washington, DC: Agency for International Development.
CIVICUS (the World Alliance for Citizen Participation) (2001). CIVICUS index on civil society project. http://www.civicus.org/index/diamondhome.html
Commins, S. (1997). World Vision International and donors: Too close for comfort? In D. Hulme, & M. Edwards (Eds.), NGOs, states and donors: Too close for comfort? New York: St Martin's Press in association with Save the Children Fund.
Connell, J. P., & Kubisch, A. C. (1999). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects, and problems. In K. Fulbright-Anderson, A. C. Kubisch, & J. P. Connell (Eds.), Theory, measurement, and analysis (Vol. 2). Washington, DC: The Aspen Institute. http://www.aspenroundtable.org/vol2/index.htm. Cited February 17, 2002.
Delp, P., Thesen, A., Motiwalla, J., & Seshadri, N. (1977). System tools for project planning. Bloomington, IN: Program of Advanced Studies in Institutional Building and Technical Assistance Methodology, International Development Institute, Indiana University.
Dobbs, J. H. (1999). Competition's new battleground: The integrated value chain. Cambridge, MA: Cambridge Technology Partners.
Drucker, P. (1990). Managing the nonprofit organization: Principles and practices. New York: Harper Collins.
Dym, B., & Jacobs, F. (1998). Taking charge of evaluation. The Nonprofit Quarterly, 5(3).
Edwards, M. (1996). Too close for comfort? The impact of official aid on nongovernmental organizations. World Development, 24(6), 961-973.
Ellinger, A. E., Keller, S. B., & Ellinger, A. D. (2000). Developing interdepartmental integration: An evaluation of three strategic approaches for performance improvement. Performance Improvement Quarterly, 13(3), 41-59.
Fowler, A. (1997). Striking a balance: A guide to enhancing the effectiveness of NGOs in international development. London: Earthscan Publications.
Funnell, S. C. (2000). Developing and using a program theory matrix for program evaluation and performance monitoring. New Directions for Evaluation, 87, 91-101.
Gabarro, J. J. (1987). The development of working relationships. In J. W. Lorsch (Ed.), Handbook of organizational behavior. Englewood Cliffs, NJ: Prentice Hall.
Gioia, D. A., Schultz, M., & Corley, K. G. (2000). Organizational identity, image and adaptive instability. Academy of Management Review, 25(2), 65-81.
Greene, J. C. (1990). Three views on the nature and role of knowledge in social science. In E. G. Guba (Ed.), Paradigm dialog. Thousand Oaks, CA: Sage.
Handy, C. (1988). Understanding voluntary organisations. London: Penguin Books.
Harbison, J. R., & Pekar, P., Jr. (1998). Institutionalizing alliance skills: Secrets of repeatable success. Strategy and Business, Second Quarter.
Huebner, T. A. (2000). Theory-based evaluation: Gaining a shared understanding between school staff and evaluators. New Directions for Evaluation, 87, 79-89.
Hulme, D., & Edwards, M. (Eds.) (1997). NGOs, states and donors: Too close for comfort? New York: St. Martin's Press in association with Save the Children.
Inter-American Foundation (IAF) (1999). The grassroots development framework: Project objectives, baseline data, and results report. Arlington, VA: Inter-American Foundation.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard: Measures that drive performance. Harvard Business Review, January-February, 71-79.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating strategy into action. Boston, MA: Harvard Business School Publishing.
Kaplan, R. S., & Norton, D. P. (2001). Transforming the balanced scorecard from performance measurement to strategic management: Part I. Accounting Horizons, 15(1), 87-104.
Kellner, P., & Thackray, R. (1999). A philosophy for a fallible world. The New Statesman, 12(547), R22-R25.
Lambert, D. M., Emmelhainz, M. A., & Gardner, J. T. (1996). Developing and implementing supply chain partnerships. The International Journal of Logistics Management, 7(2), 1-17.
Leonard, L. G. (1998). Primary health care and partnerships: Collaboration of a community agency, health department, and university nursing program. Journal of Nursing Education, 37(3), 144-151.
Levinger, B., & Bloom, E. (1997). Discussion-oriented organizational self-assessment. The Education Development Center and Pact, with assistance from the Office of Private and Voluntary Cooperation, US Agency for International Development.
Lipsky, M., & Smith, S. R. (1989-1990). Nonprofit organizations, government, and the welfare state. Political Science Quarterly, 104(4), 625-648.
Lister, S. (2000). Power in partnership? An analysis of an NGO's relationships with its partners. Journal of International Development, 12(2), 227-239.
Luhmann, N. (1988). Familiarity, confidence, trust: Problems and perspectives. In D. Gambetta (Ed.), Trust: The making and breaking of cooperative relations. Oxford: Basil Blackwell.
Lukes, S. (1974). Power: A radical view. London: Macmillan Press.
Malena, C. (1995). Relations between northern and southern non-governmental development organizations. Canadian Journal of Development Studies, 16(9), 7-29.
Murphy, D. F., & Bendell, J. (1997). In the company of partners: Business, environmental groups and sustainable development post-Rio. England: Policy Press.
Norwegian Agency for Development Cooperation (NORAD) (1989). Some planning and evaluation strategies. Guide to planning and evaluating NGO projects. Part I: Principles and policies of development assistance. Oslo: Author.
Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge: Cambridge University Press.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage Publications.
Peterson, D. J. (1997). The NGO/donor workshop: Highlights of the discussion. In E. Klose & I. Hunt (Eds.), NGO/donor workshop, Szentendre, May 12-14, 1997: A summary report. Szentendre, Hungary: ISAR: Clearinghouse on Grassroots Cooperation in Eurasia and the Regional Environmental Center for Central and Eastern Europe, in collaboration with ECOLOGIA and the Environmental Partnership for Central Europe, with support from the US Agency for International Development; the Environmental Ministries of Austria, Finland, and the Netherlands; and the World Bank.
Poulin, M. E., Harris, P. W., & Jones, P. R. (2000). The significance of definitions of success in program evaluation. Evaluation Review, 24(5), 516-536.
Provan, K. G., & Milward, J. B. (2001). Do networks really work? A framework for evaluating public-sector organizational networks. Public Administration Review, 61(4), 414-423.
Rallis, S. F., & Rossman, G. B. (2000). Dialogue for learning: Evaluator as critical friend. New Directions for Evaluation, 86, 81-92.
Rendon, L. I., Gans, W. L., & Calleroz, M. D. (1998). No pain, no gain: The learning curve in assessing collaboratives. New Directions for Community Colleges, 103, 71-83.
Schonberger, R. J. (1996). Backing off from the bottom line. Executive Excellence, May, 16-17.
Shah, J., & Singh, N. (2001). Benchmarking internal supply-chain performance: Development of a framework. Journal of Supply Chain Management, 37(1), 37-47.
Squire, L. (1995). Evaluating the effectiveness of poverty alleviation programs. New Directions for Evaluation, 67, 27-37.
Tonkiss, F., & Passey, A. (1999). Trust, confidence and voluntary organisations: Between values and institutions. Sociology, 33(2), 257-274.
Whipple, J. M., & Frankel, R. (2000). Strategic alliance success factors. Journal of Supply Chain Management, 36(3), 21-28.