This section outlines basic community evaluation approaches—site analytics, online surveys, and focus groups—for collecting and analyzing information to inform the effective management and facilitation of online communities of practice. Specifically, the approaches described in this section can be used to answer basic questions such as the following:

  • What are members doing in the community?
  • What are the popular trends in posts? Blogs? Forums?
  • What resources are being used?
  • What are the ongoing practices and processes that contribute to the “life” of the community and keep members engaged?
  • How is knowledge being shared within the community? Beyond the community?
  • Are leaders or roles emerging in the community? In what ways? How are they being cultivated?
  • How are members being supported in the community?
  • How are members contributing? Posting? Replying? (When? How often?)
  • What are the prevalent patterns of interactions?
  • How much of members’ online time is spent connecting to others in the community (e.g., reading and/or posting in forums, attending webinars)?
  • What are members’ technical issues?
  • What are members’ FAQs?

Site Analytics

At the most basic level, community leaders should evaluate traffic and popularity trends in their online communities of practice. Many Web-based platforms have internal analytics capabilities and tools to accomplish this task. If not, there are several free site analytics applications such as Google Analytics, Crazy Egg, FireStats, and Yahoo! Web Analytics that community leaders can use for this purpose. In addition, site analytics provide an extended picture of trends in the community over time. This information assists community leaders in anticipating members’ use and demands along with shifts in their viewing and posting behavior. For example, leaders in the Teach for America online community use Google Analytics to identify the number of teachers participating by region and then send that specific information to the regional leaders so they can then pursue local strategies to increase community engagement with their teachers.

Community platforms themselves and their component parts (forums, blogs, etc.) contain a variety of raw metrics for use in evaluation, many of which analytics programs can be configured to capture as well if they do not capture them directly. For example:

  • Total number of topics created (if the community has forums)
  • Total number of blog posts (if the community supports blogs)
  • Total number of photos or videos uploaded (if the community supports this function)
  • Total number of messages posted (across all forms of messaging the community supports—forums, blog comments, video comments, etc.)
  • Total quantities of other forms of user-generated content (bookmarks, lesson plans, reading lists, etc.)
  • Total participation in site polls and surveys
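
Many platforms expose these counts directly; when they must be computed from an exported data set instead, a simple tally is enough. Here is a minimal sketch in Python, assuming a hypothetical export in which each piece of user-generated content is tagged with a “type” field:

```python
from collections import Counter

# Hypothetical export: one record per piece of user-generated content,
# each tagged with its type (topic, blog_post, photo, message, ...).
content_items = [
    {"type": "topic"}, {"type": "blog_post"}, {"type": "message"},
    {"type": "message"}, {"type": "photo"},
]

# Tally the totals listed above in a single pass.
totals = Counter(item["type"] for item in content_items)
for content_type, count in sorted(totals.items()):
    print(f"Total {content_type} items: {count}")
```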

In addition, site analytics programs can provide some potentially useful overall information about the community site:

  • Total visits—the total number of times the site has been accessed or visited
  • Unique visitors—the total number of different visitors the community has had, subject to some measurement limitations
  • Repeat visitors, percentage of repeat visitors—the number or proportion of visitors who have visited the site more than once (ever, or over some period of time)
  • Number of registered participants/members
  • Conversion rate—the percentage of unique visitors who become registered members
  • Total page views—the total number of pages (broadly defined, no longer limited to HTML pages in many analytics programs) at the site that visitors have accessed
  • Average page views per visit—a simple metric for the depth of engagement a site is providing
  • Average time per visit or session—also called average hold time, particularly useful for measuring engagement in sites that rely on elements that do not generate page views as they are used
  • Top landing pages, most requested pages, top paths—Top landing pages are the most popular places for participants to enter the site, “most requested” pages are the most popular pages on the site overall, and “top paths” indicate visitors’ most frequent click paths or page sequences. All can be used to help optimize site navigation, prioritize development, or determine sections in need of change, pruning, or deletion.
  • Bounce and exit rates—Bounce rates tell leaders how many participants come to their site and exit after viewing only a single page. Exit rates tell how many leave the site after visiting a particular page. High bounce and exit rates can be indicators of design problems, though this is not always the case.
  • Top and total referrers—Top referrers tell leaders where their site’s traffic is coming from, which can be useful in determining relationships with other sites.
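
Several of the metrics above are simple ratios of raw counts, so they can be recomputed or sanity-checked by hand. The short sketch below shows the arithmetic, using made-up monthly totals of the kind an analytics program would report:

```python
# Made-up monthly totals of the kind an analytics program reports.
total_visits = 4200
unique_visitors = 1800
repeat_visitors = 950
registered_members = 410
total_page_views = 15300
single_page_visits = 1100  # visits that ended after one page view

# Derived metrics described in the list above. Note: this sketch assumes
# registrations and unique visitors come from the same reporting period;
# real analytics programs define these windows in their own ways.
conversion_rate = registered_members / unique_visitors * 100
repeat_rate = repeat_visitors / unique_visitors * 100
avg_page_views_per_visit = total_page_views / total_visits
bounce_rate = single_page_visits / total_visits * 100

print(f"Conversion rate: {conversion_rate:.1f}%")
print(f"Repeat visitors: {repeat_rate:.1f}%")
print(f"Average page views per visit: {avg_page_views_per_visit:.1f}")
print(f"Bounce rate: {bounce_rate:.1f}%")
```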

Finally, community momentum is easy to evaluate with community or site analytics. Community leaders can answer questions about their community’s performance over time, such as the following:

  • What is the average number of new topics (where “new” = the last 30–90 days)? The average number of new blog posts created? The average number of comments? The average number of downloads and uploads (video, audio, lesson plan, etc.)?
  • What is the proportion of new topics that get 10+ replies? The percentage of new blogs that get 10+ comments? The percentage of (video, audio, lesson plan, etc.) uploads that get 100+ downloads or 10+ comments?
  • What proportion of new topics or new blog posts goes unanswered or uncommented on (an important measure of the responsiveness of the community, which in turn affects key factors like trust, especially for educators)?
  • Are the numbers of participants in new site polls or surveys growing?
  • What is the average number of new topics, replies, or blog posts created per member?
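
To make concrete how these momentum questions reduce to simple arithmetic over dated records, here is a minimal Python sketch; the topic records and field names are illustrative assumptions, not any particular platform’s export format:

```python
from datetime import datetime, timedelta

now = datetime.now()

# Assumed topic records: creation date and reply count per topic.
topics = [
    {"created": now - timedelta(days=5),   "replies": 14},
    {"created": now - timedelta(days=40),  "replies": 0},
    {"created": now - timedelta(days=200), "replies": 3},
]

# "New" here means created in the last 90 days.
cutoff = now - timedelta(days=90)
new_topics = [t for t in topics if t["created"] >= cutoff]

if new_topics:
    pct_ten_plus = sum(t["replies"] >= 10 for t in new_topics) / len(new_topics) * 100
    pct_unanswered = sum(t["replies"] == 0 for t in new_topics) / len(new_topics) * 100
    print(f"New topics (last 90 days): {len(new_topics)}")
    print(f"With 10+ replies: {pct_ten_plus:.0f}%")
    print(f"With no replies: {pct_unanswered:.0f}%")
```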

In cases where communities, by default, display content that has been most recently added to/commented on (“latest activity”), it often makes sense to look at active topics or the most popular topics as well, not just to look at how the community responds to the latest topics created. For communities that allow the creation of groups, metrics like median group size, proportion of groups posted (or not posted) to in the last month/three months/six months, and proportion of groups with fewer than 10 (or 5) members should be regularly monitored to see whether tighter management of group creation or group consolidation is needed.
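
The group metrics just mentioned can be computed the same way. Here is a short sketch, again using assumed record fields rather than any particular platform’s format:

```python
from datetime import datetime, timedelta
from statistics import median

now = datetime.now()

# Assumed group records: member count and date of the most recent post.
groups = [
    {"members": 42, "last_post": now - timedelta(days=2)},
    {"members": 4,  "last_post": now - timedelta(days=120)},
    {"members": 9,  "last_post": now - timedelta(days=45)},
]

median_size = median(g["members"] for g in groups)
pct_stale = sum(g["last_post"] < now - timedelta(days=90) for g in groups) / len(groups) * 100
pct_small = sum(g["members"] < 10 for g in groups) / len(groups) * 100

print(f"Median group size: {median_size}")
print(f"Groups with no posts in the last three months: {pct_stale:.0f}%")
print(f"Groups with fewer than 10 members: {pct_small:.0f}%")
```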

Managing Evaluation Approaches

It is easy to get overwhelmed with the variety of evaluation approaches and metrics available. One key is not to try to measure everything. Every community has—or should have—one or several primary forms of interaction or activities—for example, its forums, its blogs, its multimedia libraries. In general, these primary interactions or activities should be the focus of any evaluation approach. A secondary focus should be on tracking new community features or sections that have just been launched to make sure they are getting traction. Other interactions and activities can be monitored more periodically, for example, to determine whether there are underperforming features that should be addressed, pruned, or eliminated.

A second key to keeping things manageable is to measure and review at intervals of time, not continuously. A monthly snapshot of activity is generally enough, and most site analytics programs are set up to provide monthly measures (in addition to total measures)—for example, visits per month, page views per month, or page views or visits this month. These types of monthly snapshots are also great ways to benchmark and track a community’s progress over time.
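
For platforms that do not retain history, one low-effort way to build such a benchmark is to append each monthly snapshot to a simple CSV file. A minimal sketch follows; the metric names are placeholders, not prescribed fields:

```python
import csv
import os
from datetime import date

# Placeholder metric names; use whatever the community actually tracks.
snapshot = {
    "month": date.today().strftime("%Y-%m"),
    "visits": 4200,
    "page_views": 15300,
    "new_topics": 87,
}

path = "community_snapshots.csv"
write_header = not os.path.exists(path) or os.path.getsize(path) == 0

with open(path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(snapshot))
    if write_header:  # write the header only for a brand-new file
        writer.writeheader()
    writer.writerow(snapshot)
```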

Online Surveys

Online surveys can simply and inexpensively query large numbers of community members about their feelings, opinions, experiences, and technical needs. Some of the advantages of online surveys include the following:

  • Direct query—Online surveys can allow leaders to ask participants more directly about issues and opportunities and get more direct evaluations of the community than site or community analytics can provide.
  • Segmentation—Online surveys can enable leaders to learn what is important, what is working and what is not, for different types of participants in their population.
  • Broader reach—Online surveys can be used not only to learn what community participants want and need, but also to find out what potential domain participants who are not yet part of the community might want and need.
  • Anonymity—Participants in online surveys may be more likely to tell leaders things they would not say in the community itself or in phone or in-person interviews.

Many community platforms have built-in tools that can be used to administer online surveys. If not, community leaders can choose among several free sites, such as Google Forms, Kwik Surveys, and QuestionPro, to conduct online surveys. Other providers, like Survey Monkey and Survey Gizmo, offer a reasonable depth of survey functionality at relatively low cost (and often have free levels or free trials). There are also “low impact,” more versatile options like KISSInsights that allow leaders to ask a single question associated with or about each site page, and services like UserVoice and IdeaScale that let members survey each other, proposing and voting on one another’s ideas for site improvements.

Here are some examples of the types of questions that online surveys can help answer:

  • What new features, functions, or activities would be of greatest interest to community members?
  • How has the community helped members solve problems of practice?
  • How can the community attract new members and encourage existing members to participate more actively?
  • What are the strengths and weaknesses of the community for different types of participants (e.g., participants with different demographics, different educational roles, different community roles—influencers, producers, raters/reviewers, networkers, lurkers)? What changes would have the greatest potential impact on each?
  • What technical difficulties or issues are members experiencing?

Some tips on developing and executing effective online surveys:

  • Survey length—Without additional incentives, most online surveys should not take respondents more than about 5 to 10 minutes to finish, and it is a good idea to test how long it really takes to complete by having a few colleagues take it before the survey is launched.
  • Setting expectations—An online survey should always tell respondents up front how long it is likely to take to complete and provide a progress meter on every screen so respondents can track how far they have gotten and how much more they have left to complete.
  • Varying question types—A good way to avoid respondent fatigue is to use a variety of question styles throughout the survey instrument.
  • Multipoint scales—Asking a series of related questions, each of which asks the respondent to rate something (or their agreement with a statement) on a 1–5 or 1–10 scale, is often a great way to collect a lot of useful data in a brief amount of time.
  • Skip logic—Skip logic, which automatically routes specific respondents past questions that don’t apply to them (on the basis of the way they answered previous questions), is a great way to increase the scope and depth of what a community survey can cover (as well as avoid annoying respondents with questions that are irrelevant to them).
  • Randomization—Randomizing the order of responses/choices a respondent sees for each question is often an important way to reduce order bias, although there are cases where a fixed order of responses is more appropriate.
  • Choices + “other” vs. open-ended—Open-ended, essay-type questions should be used judiciously in an online survey. Although they can be useful for generating direct quotes that add resonance and additional dimension to quantitative results, open-ended questions limit how many questions the community can be asked, because they take longer for participants to complete; in fact, many participants, particularly less committed ones, will not respond to open-ended questions or will even drop out unless allowed to skip them. If one is unsure of the possible responses or options to offer respondents to a particular question, leaders should consider brainstorming options with trusted members of their communities. When there is concern that a choice has been missed, consider including an open-ended “other (please specify)” option.
  • Soliciting interest in follow-up surveys—At the end of the survey, it is usually a good idea to ask respondents whether they would be willing to be part of a follow-up survey or become part of an ongoing online panel for the community, and to collect their e-mail addresses for this purpose (with a very prominent link to the community’s privacy policy on the screen). This can help communities conduct longitudinal surveys that track changes over time, provided response rates remain high.
  • Providing incentives—Providing a small reward of some kind is often an effective way to increase response rates, particularly to long surveys, and to increase the likelihood that a community member will participate. Small rewards often are more effective than one might expect. Consider awarding each survey respondent community points or badges that hold significance within their community. Or, if feasible, making respondents eligible to be randomly selected to receive a small prize (under $10 in value), such as a gift certificate for a widely used online vendor.
  • Number of respondents—The larger the number of participants, the easier it is to compare findings from subsets of the respondents and draw reliable conclusions about different groups (the sketch after this list gives a rough sense of how the margin of error shrinks as the number of respondents grows).
  • Representativeness—Surveys often begin with screeners that ensure that respondents meet certain minimum requirements to participate. With some additional programming, screeners can also be used to ensure that a sample is demographically representative or representative in other ways. It is important to include questions about demographics, educational role, community role, and community use so that leaders can see how different kinds of members answer other survey questions. Among other things, this will allow leaders to get an accurate picture of how their members as a whole—not just their most active members (who are usually the most likely to participate in evaluations)—really feel about the community.
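
As noted in the list above, more respondents mean more reliable conclusions. As a rough, general statistical rule (not specific to any survey tool), the worst-case margin of error for a reported percentage at 95% confidence shrinks with the square root of the sample size:

```python
import math

# Worst-case margin of error for a proportion at 95% confidence:
# MOE = 1.96 * sqrt(0.25 / n). A general statistical rule of thumb,
# not a figure prescribed by any particular survey tool.
for n in (50, 100, 400, 1000):
    moe = 1.96 * math.sqrt(0.25 / n) * 100
    print(f"{n} respondents -> roughly +/-{moe:.0f} percentage points")
```
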
Sample Size

Do community leaders really have to look at every topic, every member profile, or even all the content on their site to get at evaluation measures like the percentage of posts that go unanswered or the average number of replies per topic? The short answer is no. In general, community leaders can get reasonable approximations for these and other measures by taking random samples from their communities.

What is a reasonable number of items? In general, looking at, for example, 50–100 topics, or 50–100 profiles, or 50–100 messages or posts, ideally per month, should provide a manager with potentially useful information. A couple of more specific examples: Leaders might look at 50–100 topics to get a better sense of “hot topics.” Knowing the hot topics would then enable leaders to set up webinars or engage experts from the field to come into the community and generate some form of structured conversations that would enable members to increase their knowledge about these topics.
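
To make the sampling approach concrete, here is a minimal sketch that estimates the share of unanswered topics from a random sample of 75; the reply counts here are simulated stand-ins for real platform data:

```python
import random

# Simulated stand-in for real data: one reply count per topic.
all_topic_reply_counts = [random.randint(0, 20) for _ in range(5000)]

# Estimate a community-wide measure from a random sample of 75 topics
# (50-100 items, as suggested above, is usually plenty).
sample = random.sample(all_topic_reply_counts, 75)
estimated_pct_unanswered = sum(c == 0 for c in sample) / len(sample) * 100
print(f"Estimated share of topics with no replies: {estimated_pct_unanswered:.0f}%")
```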

Another example: A scan of member profiles could help leaders determine how much information members are currently sharing that allows fellow members to know more about their areas of expertise. If members have not revealed much about themselves, the community manager might design some online events that provide opportunities for members to get to know one another better (e.g., creating opportunities for online collaboration to help increase expertise transparency).

Focus Groups

In focus groups, selected community members are invited to interact freely with other members around certain questions, resources, or activities. Focus groups create a permissive atmosphere in which participants are encouraged to share personal and even conflicting viewpoints on their experience in the community. Focus groups help community leaders get a handle on members’ perceptions, beliefs, and attitudes, enabling them to gain better insight into members’ opinions, rationales, and experiences that are not easily captured in site analytics or online surveys. When well moderated, focus groups can lead to information that is greater than the sum of the individual contributions. For example, group interactions may reveal reasons that members choose not to post or reply in the community, and other issues or concerns may surface during the focus group discussion.

Since members of online communities of practice may reside all over the globe, conducting online, as opposed to face-to-face, focus groups is highly recommended. There are several easy ways to carry out online focus groups. For example, many community platforms offer chat or voice over Internet protocol (VoIP) features that can facilitate focus groups. If not, community leaders can choose among several free sites, such as DimDim or Mikogo, to conduct online focus groups in virtual meeting rooms. Another alternative is a telephone conference call using a free tool like FreeConferenceCall or Skype. Questions similar to the ones suggested above for online surveys may be used to guide focus group discussions. Focus groups can also be asked to review prototype activities, draft resources, or other materials and comment on their value, ease of use, and any technical issues.

Here are some tips on creating and running effective focus groups:

  • The moderator is key—The moderator should be experienced at running these types of groups and ideally be independent of the community—though with at least some subject knowledge of what the community covers—so that the moderator has no inclination to push the discussion in any specific direction and instead encourages the focus group participants to convey all their relevant thoughts and recommendations.
  • Preparation is important—Valuable free-flowing conversations in focus groups are often the result of substantial advance planning by the moderator and the community team, who typically develop multiple iterations of a “moderator’s guide” in advance of the sessions.
  • Be mindful of the makeup of the focus group—In some cases, as the goal of the group dictates, it can be very important to have a group that is fairly homogeneous in role and experience (e.g., a group of principals) so that their insights are more likely to build on one another; in other cases, more heterogeneous groups are more useful.
  • Careful attention to group dynamics is needed—To ensure that the discussion is a group process, care must be taken to enable everyone in the group to be an active participant (to the extent this is reasonable and possible) and to ensure that particular individuals do not dominate, steer, or run away with the conversation.
  • It can be an interactive process—Moderators and community leaders should plan the timing of focus groups so that there is enough time to debrief between them and make adjustments from group to group—it is not necessary that each group cover the same ground in the same way.

So far, the least costly and most basic evaluation approaches have been discussed. The next section of this document, Advanced Community Evaluation Approaches, provides a number of ideas for more complex evaluation strategies.

Privacy and Data Collection

Conducting evaluations of online communities of practice raises privacy issues for community leaders to consider carefully. Who will see this information? How will it be used? Does it protect individuals’ information and contributions from being identifiable? Does it help cultivate the community, or does it create distinctions among members? Most of the basic evaluation approaches discussed in this brief are voluntary and/or do not involve members’ identifiable information (e.g., aggregate site analytics). However, community leaders should have a clear and prominent privacy policy and terms of use/service agreement (e.g., one that members agree to when signing up) within the community. Some sample privacy policies and terms of service agreements can be found here. In general, community leaders should do the following:

  • Use numerical averages or aggregates wherever possible (e.g., “the average topic in our forum gets x replies,” “10% of the posts have [characteristic]”).
  • Avoid quoting directly from community content (e.g., from a member’s post) or from individually identifiable survey or focus group responses in any report, unless specific permission has been obtained from the community contributor.

Leaders of communities that include K–12 students and/or display their work should be aware that there are strict regulations on collecting personally identifiable data of any kind from children under 13 (see How to Comply with COPPA and the Department of Education’s Family Policy Compliance Office for more information).

