May 28, 2017 | Missiology

Church Planting Metrics: Measure What’s Important (Part Two)

Measure outcomes, not activities.

Read Church Planting Metrics: Measure What’s Important, Part One (The Problem of Measurement Inversion; Defining and Measuring “Healthy Church”; Is Your Organization Suffering from Measurement Inversion?).

Define the Object of Measurement (i.e., Church)

As discussed in Part One, the end goal for church planters should be a biblically healthy church. The target "biblically healthy church," however, is too vague to be observed and quantified. The design team needs to deconstruct the target into sub-targets and observable indicators.

Sub-targets are the components that characterize biblically healthy churches. They don’t describe church planting activities, but rather results of those activities. The list of sub-targets, when taken together, should be an accurate description of a healthy church without going beyond the biblical definition of church. The design team prepares a draft list of sub-targets, which they revise and rewrite as they receive input from leaders and practitioners. To keep the list of sub-targets manageable, the final list should be as short as possible (e.g., seven or fewer sub-targets).

The following questions can serve as a guide for the design team as they define sub-targets:

  1. Is the sub-target a description of an outcome (rather than an activity)?
  2. Is the sub-target an essential and irreducible component of a healthy church?
  3. Taken together, do the identified sub-targets comprise an adequate description of a healthy church, or are other components still missing?
  4. Does any sub-target go beyond what is biblical (i.e., does it reflect the organization’s traditions or cultural idiosyncrasies)?

One can think of sub-targets as a success checklist; when the condition is achieved, we check the corresponding box. Sub-targets, however, are not observable in a way that shows progress toward their achievement (Gohl 2003). For this reason, the design team needs to define at least one observable indicator for each sub-target.

An indicator is “the exemplary, concrete description of an essential feature of a sub-target” (Gohl 2003). A helpful starting point for defining indicators is to filter each sub-target through a series of questions and determine which answers identify its essential features. The key questions are: who, what, when, where, how, and how much/many? Not all six of these questions will be equally helpful for every sub-target. Focus only on features that provide information that affects decisions (Hubbard 2007, 96).

In other words, when these questions are posed, which answers affect or inform how a church planting team would use its resources? Take, for example, the sub-target “believers are discipled toward maturity.” The question “how” could have many answers, but the actual methods, material, or program design are not essential since they focus on inputs rather than outcomes. The essential feature in this case is defined best by the question “what” (i.e., we see believers increasingly demonstrate love for one another, spiritual hunger, the fruit of the Spirit, etc.).

Quantifying the Indicators

The design team should decide on the best method for scoring each indicator. When data are quantitative, scoring the indicator is relatively simple. For an indicator like “There are at least two elders,” you simply count. When quantitative data are expressed in homogenous units (e.g., euros), ratios can be calculated. For example, to measure the indicator “church operations are funded by local contributions,” compare total operational expenditures and income from local sources to determine what percentage of costs are covered by the church.
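The local-funding ratio described above can be sketched in a few lines. This is a minimal illustration; the euro figures are hypothetical, not drawn from any actual church plant.

```python
# Sketch of the "church operations are funded by local contributions" indicator.
# Both figures are hypothetical monthly amounts in euros.
local_income = 1800.0       # income from local sources
operational_costs = 2400.0  # total operational expenditures

# Percentage of operational costs covered by local contributions.
local_funding_pct = 100.0 * local_income / operational_costs
print(f"{local_funding_pct:.0f}% of costs covered locally")  # prints "75% of costs covered locally"
```

Tracked over time, a rising percentage indicates movement toward self-supporting church operations.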

When indicators do not yield quantitative data, yet can be observed to varying degrees, it is best to rate the indicator on a scale. This is a bit trickier, since all qualitative measurements are subjective and depend on scorers’ judgments or opinions.

It is important, therefore, to design scales that provide consistent ratings from different raters. First, define what is meant by each of the key terms. Second, design a rubric, a set of criteria that raters will use to determine how an item should be scored. The rubric in Figure 1 is designed for the indicator, “There are biblically qualified elders.” It is based on the biblical qualifications for elders found in 1 Timothy 3:2-7. The rubric defines what a rater needs to observe in church elders in order to rate them accurately and consistently. Third, ensure that raters are trained to use the rubric.

5 = clearly evident, 3 = sometimes evident, 1 = not evident
(binary characteristics are bold and must be scored either 5 or 1)

  • Is the elder the husband of one wife?
  • Is the elder temperate and respectable?
  • Is the elder hospitable?
  • Is the elder able to teach, either in a group or one-on-one context?
  • Is the elder free from addiction?
  • Is the elder gentle and peaceable?
  • Is the elder free from the love of money?
  • Does the elder manage his own household well, keeping his children under control?
  • Has the elder been a believer for more than six months?
  • Does the elder have a good reputation with those outside the church?
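The scoring rule above (each item rated 5, 3, or 1, with binary items restricted to 5 or 1) can be sketched as a small validation-and-averaging routine. This is only an illustration: the item names, the choice of which items are binary, and the use of a simple average are all assumptions, not Avant's actual survey logic.

```python
def score_rubric(ratings, binary_items):
    """Validate rubric ratings and return the average score.

    ratings: dict mapping item name -> rating (must be 5, 3, or 1)
    binary_items: set of item names that may only be scored 5 or 1
    """
    for item, rating in ratings.items():
        if rating not in (1, 3, 5):
            raise ValueError(f"{item}: rating must be 5, 3, or 1")
        if item in binary_items and rating == 3:
            raise ValueError(f"{item}: binary items must be scored 5 or 1")
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings from one observer (item names abbreviated).
ratings = {
    "husband of one wife": 5,        # treated as binary here
    "temperate and respectable": 3,
    "hospitable": 5,
    "able to teach": 3,
}
print(score_rubric(ratings, binary_items={"husband of one wife"}))  # prints 4.0
```

Rejecting invalid ratings at scoring time is one way to keep different raters on the same scale before their scores are compared.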

Even with these guidelines, it is often difficult to decide which measurement method is best for each indicator. For example, should an indicator like “believers are discipled toward maturity” be measured on a scale, by a yes or no answer, or by calculating the percentage of church members who are actively being discipled? The following questions may help when deciding which measurement method should be used:

  1. Which method best reduces uncertainty regarding the indicator’s essential feature?
  2. Which method provides higher information value (i.e., it informs the decisions of the church-planting team)?
  3. Which method is most likely to produce the same response when scored by different observers?

Avant Ministries’ Measurement Instrument

In 2002, Avant Ministries began a two-year process similar to the one described in this article, in which leaders and church planters gave input to each draft of sub-targets and indicators. In retrospect, the leaders at Avant recognized that the process itself was valuable and ensured that the instrument would be sound both in theory and practice.

Additionally, it helped church planters buy in to the new instrument. The following summary of Avant’s measurement instrument is, consequently, to be treated as an example of an outcomes-measurement instrument rather than an instrument to be adopted by other church planters.

Figure 1 displays the deconstruction of the target church into five sub-targets and thirteen indicators. The thirteen indicators are scored using a 36-question survey that directs scorers to count, rate, or classify the indicators. The graph in Figure 2 displays the overall achievement of the target and the relative strength of the thirteen indicators. Note that the graph does not dictate a ministry plan; rather, it gives a snapshot of the current health of the church plant. The team can then use their knowledge of the ministry context to decide how best to focus their efforts to strengthen the weak areas of the church.

Figure 1

Figure 2

Final Reflections

I have worked on two church planting projects with Avant Ministries. The two projects had a number of similarities. Both were set in predominantly Catholic Europe, where church planters were seeing very little fruit. My co-workers in both projects were gifted and passionate about seeing the church established. Yet one project was substantially more successful than the other. I believe it was largely due to the fact that in the successful project the team had a clear definition of church and an instrument for measuring progress toward the goal.

The clear goal and constant measurement kept us from straying from our mission, and guided our decision making. Most importantly, it gave us a glimpse into what God was doing, which caused us to take bigger steps of faith than we otherwise might have taken.


  • Breslin, Scott. 2007. “Church Planting Tracking and Analysis Tool.” Evangelical Missions Quarterly 43(3): 508-515.
  • Corwin, Gary. 2005. “Church Planting 101.” Evangelical Missions Quarterly 41(2): 142-143.
  • Deyneka, Peter. 1999. Omega Course: Critical Church Planter Training: Manual Four. South Holland: Bible League.
  • European Union Joint Evaluation Unit. 2006. Evaluation Methods for the European Union’s External Assistance: Methodological Basis for Evaluation Vol. 1. Luxemburg: Office for Official Publications of the European Communities. Accessed February 1, 2013, from ec.europa.eu/europeaid/evaluation/methodology/examples/guide3_en.pdf.
  • Gohl, Eberhard. 2003. Checking and Learning: Impact Monitoring and Evaluation, a Practical Guide. Association of German Development NGOs (VENRO). Accessed February 1, 2013, from www.sle-berlin.de/sleplus/files/Checking%20and%20learning.PDF.
  • Hubbard, Douglas. 2007. How to Measure Anything: Finding the Value of Intangibles in Business. Hoboken, N.J.: Wiley and Sons.
  • Walker, Philip. 2005. “The Transition from Church Growth to Church Health.” Journal of the American Society for Church Growth 16: 3‐13.
  • Warren, Rick. 1995. The Purpose Driven Church. Grand Rapids, Mich.: Zondervan.

Copyright © 2013 Billy Graham Center. All rights reserved.

