Basic metrics and methodologies can only answer questions to a certain degree. For example, simple measures like page views per visit or time spent on the site can be strong indicators of member engagement, but can also be artifacts of poor or confusing site design. Determining engagement more definitively often requires more sophisticated tools. This section describes advanced evaluation approaches that can be used to answer more in-depth questions such as:

  • How well is the community doing as a community? Which dimensions of a successful community are being achieved in the eyes of members (such as reducing feelings of isolation)?
  • What is each community element contributing to the community as a whole?
  • How is the design of the community impacting member use and participation?
  • What are the emerging benefits of the community for members?
  • What are the emerging cultural norms or themes of the community?
  • What kinds of questions are being asked during synchronous activities (e.g., webinars)?

By their nature, online communities provide extraordinarily rich sources of data. A great deal of the data that online communities generate is publicly available, meaning that conclusions drawn from one community can be relatively easy for others to verify, and that community leaders can often easily compare how they’re doing with other communities.

Evaluation of Impact on Practice

Some forms of impact are easier to measure than others. Leaders can measure and demonstrate dissemination impact (the influence of their community on other communities and sites), for example, by:

  • Tracking in-bound links (using standard site analytic tools like Google Analytics)
  • Tracking aggregate outbound ‘sharing’ activities (such as ‘send to a friend’ or ‘share on Facebook’) of community content by community members (a brief tally sketch follows this list).
  • Doing global searches on phrases contained in the community’s most popular pieces of content to see how many other sites have “picked them up.” Services like Social Mention and Addict-o-matic can also help with this, as well as potentially help leaders understand how others perceive their community.
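As a simple illustration of the second bullet, the sketch below tallies outbound share events by channel. It assumes a hypothetical CSV export of community events; the filename and column names are invented, not a standard format:

```python
import csv
from collections import Counter

# Tally outbound 'share' events by channel from a hypothetical event-log
# export with columns: timestamp, member_id, event, channel.
share_counts = Counter()
with open("community_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["event"] == "share":            # e.g., 'send to a friend'
            share_counts[row["channel"]] += 1  # e.g., 'facebook', 'email'

for channel, count in share_counts.most_common():
    print(f"{channel}: {count} shares")
```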

In the end, at least some effort should be made, if possible, to evaluate impact on actual educator practice and/or on student learning. Some of this can be done through online surveys that query community members about:

  • Changes in practice or student performance since they started using the community;
  • Attribution of practice changes to the community.

Impact on practice can also be assessed through longitudinal surveys in which a group of community participants is queried periodically over time about practice and student performance. Because proving these impacts can be difficult and expensive, leaders who really want, and have the means, to definitively measure impact on practice and student learning should, in most cases, focus on one or two indicators/results that:

  • Can most clearly be tracked back to the community;
  • If proven, imply a number of other potential impacts;
  • Allow for meaningful quantifications of impact when applied to the community as a whole.

If at all possible, the study should include a control group of educators who have never used the community (and agree not to throughout the study).

Leaders who lack the resources to carry out such research may find it useful to review the research literature for studies that show the impact of specific online communities, which they can relate back to their own communities and experiences (see For More Information section).

Benchmarking

How is a community progressing over time? How is it doing compared to other communities? Because the development of the Internet and online communities is so dynamic, because evaluation approaches are still being codified and standardized, and because funders/sponsors are still making up their minds about which criteria matter most, the answer in many cases at this point is: “it depends.” But there are still several ways to use evaluation approaches to get a better sense of how a community is doing:

Benchmarking against self—Taking monthly snapshots of the user base and their rates of activity provides a natural way for leaders to track performance over time, show growth and progress, and flag potential problems before they become serious.

Benchmarking against others—Much of the data a community collects about itself (e.g., registered participants, number of topics, number of replies per topic) is often also visible for other communities, provided they are not closed to outsiders. It often makes sense to pick 2–3 other communities that might be considered comparable in purpose or audience and take at least some of the same monthly measures on them for purposes of ongoing comparison (a brief sketch of both kinds of benchmarking follows).
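A minimal sketch of both kinds of benchmarking, assuming a local event-log export for one’s own community and hand-collected monthly figures for two comparable communities; all filenames, column names, and numbers are hypothetical:

```python
import csv
from collections import defaultdict

# Benchmarking against self: monthly active members from a hypothetical
# event log with columns: date (YYYY-MM-DD), member_id, event.
active = defaultdict(set)
with open("community_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        active[row["date"][:7]].add(row["member_id"])  # key: 'YYYY-MM'

for month in sorted(active):
    print(month, "active members:", len(active[month]))

# Benchmarking against others: hand-collected monthly topic counts for
# two comparable (open) communities; the numbers are illustrative only.
peer_topics = {
    "2013-03": {"ours": 41, "peer_a": 55, "peer_b": 23},
    "2013-04": {"ours": 48, "peer_a": 52, "peer_b": 31},
}
for month, counts in sorted(peer_topics.items()):
    print(month, "new topics:", counts)
```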

In addition, at least some site analytics programs (e.g., Google Analytics) now provide benchmarking tools that allow community leaders to compare their performance in a variety of areas with that of all other sites of the same size and/or category using the program. Services like Compete and Alexa can help make at least some high-level comparisons as well. Websitegrader uses Alexa and other resources to grade sites (and their competitors) on a variety of more granular parameters (e.g., readability, image use, metadata).

Evaluation of Community Cohesion and Connectedness

More ambitious community leaders can move beyond basic evaluation approaches to more complex social network analysis, which can reveal important facets of the community. The purposes of social network analysis include describing the interactions and relations among participating members, identifying patterns of interaction, and tracing how information flows within the community. Community leaders could use computer log files and forum discussion posts along with an easy-to-learn social network analysis tool such as NodeXL, developed by the Social Media Research Foundation, to visualize the interaction patterns and social processes experienced by community participants. A list and description of other leading social network analysis software packages can be found here.

The value of a social network analysis approach is that it lets leaders ask and answer questions that focus on community relationships. Statistics about community participation can provide important insights about the engagement of a community, but say little about the connections between community members. Social network analysis can help explain important social phenomena such as group formation, group cohesion, social roles, personal influence, and overall community health.

These can be particularly important measures of the present and future health of a community, especially one made up of front-line educators, who want, need, and expect communities to reduce isolation and disconnectedness and/or to provide quick responses to their questions. Community leaders can use social network analysis to answer questions such as the following (a brief scripting sketch appears after the list):

  • What is the average number of “friends” or “colleagues” that community members list or have collected in/on their member profile pages?
  • What is the community’s network density (i.e., the number of “friends” or “colleagues” each member has as a function of the total size of the community)?
  • What are the patterns of interaction among the community’s members?
  • Who are the important members in the community?
  • What are the important member sub-groups in the community?
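NodeXL is a spreadsheet-based tool; leaders comfortable with a little scripting can explore the same questions with the open-source networkx library for Python. The sketch below computes density, surfaces central members, and detects sub-groups; the reply edges are invented for illustration:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical reply relationships mined from forum posts: an edge (a, b)
# means member a replied to member b at least once.
edges = [("ana", "ben"), ("ana", "cat"), ("ben", "cat"),
         ("dee", "eli"), ("eli", "fay"), ("cat", "dee")]
G = nx.Graph(edges)

# Network density: actual ties as a share of all possible ties.
print("density:", round(nx.density(G), 3))

# Important members: degree centrality highlights well-connected hubs.
central = sorted(nx.degree_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
print("most central:", central[:3])

# Member sub-groups: modularity-based community detection.
for i, group in enumerate(greedy_modularity_communities(G)):
    print(f"sub-group {i}:", sorted(group))
```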

Evaluation of Community Ownership, Quality, & Trust

Community leaders can use various evaluation approaches to “take the temperature” of desirable community dimensions, such as ownership, quality (e.g., community content and dialogue), and trust.

Community Ownership. Online community veterans have found that the point at which members “take over” and “make the community their own” is a key landmark in the long-term community development process. Depending on how a community is configured, it should be possible to evaluate “participant ownership” by taking samples of its membership to determine:

  • The proportion of members who have made return visits to the community;
  • The proportion of members who make five or more return visits/month;
  • The proportion of members who have contributed content to the community;
  • The proportion of members who have engaged in collaborative community activities;
  • The proportion of members who have taken on leadership responsibilities of some kind within the community, which can range from management or moderation to (informally) regularly responding to queries related to an area of expertise.

Community leaders should be able to use their community’s site analytics to capture the information for the first two bullets. For the remainder, gauging the proportion of members who have contributed content (and, to some extent, the proportion who have assumed leadership responsibilities) is clearly easier for communities that allow content searches by username.
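For communities whose analytics can be exported, tallying these proportions is straightforward. A minimal sketch, assuming a hypothetical member export (the filename and column names are invented):

```python
import csv

# Ownership proportions from a hypothetical member export with columns:
# member_id, return_visits_per_month, posts, collaborations, roles.
with open("members.csv", newline="") as f:
    members = list(csv.DictReader(f))

def share(pred):
    """Percentage of members matching a predicate."""
    return 100 * sum(1 for m in members if pred(m)) / len(members)

print("returned at all:   %.1f%%" % share(lambda m: int(m["return_visits_per_month"]) > 0))
print("5+ visits/month:   %.1f%%" % share(lambda m: int(m["return_visits_per_month"]) >= 5))
print("contributed posts: %.1f%%" % share(lambda m: int(m["posts"]) > 0))
print("collaborated:      %.1f%%" % share(lambda m: int(m["collaborations"]) > 0))
print("took on a role:    %.1f%%" % share(lambda m: m["roles"].strip() != ""))
```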

Community Quality. As communities strive to demonstrate impact on practice, evaluation of community quality becomes increasingly important. Community leaders may find it useful to attempt to measure the following (a small tallying sketch appears after the list):

  • The proportion of topics or replies that specifically relate to practice;
  • The proportion of replies where links to potentially helpful resources or other referrals are provided;
  • The proportion of replies to a post in which helpful or constructive advice is directly provided;
  • The proportion of replies that build on previous posts (as opposed to just responding to the original poster);
  • The proportion of replies that contain offers of collaboration or introductions to potential collaborators;
  • The proportion of replies that contain creative, novel, or innovative ideas;
  • The proportion of replies that summarize, distill, or synthesize prior posts/replies.
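One low-tech way to operationalize these proportions is to hand-code a random sample of replies against the categories above and tally the results. A minimal sketch, with the codes and sample invented for illustration:

```python
from collections import Counter

# Hypothetical hand-coded sample: each sampled reply is tagged with zero
# or more quality codes drawn from the categories listed above.
coded_replies = [
    {"relates_to_practice", "resource_link"},
    {"direct_advice"},
    {"builds_on_prior", "direct_advice"},
    set(),                      # a reply with none of the quality markers
    {"collaboration_offer"},
]

n = len(coded_replies)
counts = Counter(code for reply in coded_replies for code in reply)
for code, count in counts.most_common():
    print(f"{code}: {count}/{n} = {100 * count / n:.0f}% of sampled replies")
```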
For ambitious community leaders, there is a wide variety of advanced evaluation approaches for looking at quality, such as online content analysis methodologies like the Interaction Analysis Model (IAM) (see For More Information section). Community leaders may want to initially use more general content analysis tools such as Wordle, Leximancer, and/or Tagul.

The Importance of User Content Rating Tools

One simple way to measure community quality is by user content ratings. A user content rating tool can:

  • Be a practical way to help community leaders decide which content and/or activities to continue and which content is beneficial to their membership;
  • Give educators who are not comfortable with posting or writing a simple way to contribute to the community;
  • Motivate more quality contributions from top contributors, particularly if average ratings for each contributor are used to help determine their status in the community in some way (see e.g. “Recognizing and Rewarding Individual and Team Contributions,” p. 18 in Connected and Inspired).

Given how busy educators are, getting content consistently and appropriately rated in most education communities is also likely to require implementing reputation and recognition systems for critics/raters (like Amazon’s “Top Reviewer” system).
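If per-item ratings can be exported, computing average ratings per contributor (the kind of figure that could feed a status or recognition system) is simple. A minimal sketch with invented names and values:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (contributor, stars) pairs harvested from a 1-5 star
# content rating widget; names and values are invented.
ratings = [("pat", 5), ("pat", 4), ("lee", 3), ("lee", 5),
           ("pat", 5), ("kim", 2), ("lee", 4)]

by_contributor = defaultdict(list)
for contributor, stars in ratings:
    by_contributor[contributor].append(stars)

# Average rating per contributor, which could feed the kind of status
# or recognition system suggested above.
for contributor, stars in sorted(by_contributor.items()):
    print(f"{contributor}: avg {mean(stars):.2f} over {len(stars)} ratings")
```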

Community Trust. Members of an online community need to feel safe. The extent to which members trust the community as an institution and trust individual members of the community directly impacts their willingness to engage in knowledge-sharing interactions. Trust is complex and difficult to measure under any circumstances. That said, by taking ‘core samples’ from their community’s discussions, community leaders can develop useful evaluation approaches to approximate the level of trust in their communities. For example, a community manager may look at the following (a small sampling sketch appears after the list):

  • The proportion of posts in which community members show or express vulnerability, such as a lack of domain knowledge;
  • The proportion of posts in which community members share personal stories;
  • The proportion of posts in which community members are (emotionally) supportive or helpful to other members.
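A simple way to draw such a ‘core sample’ for hand-coding, assuming posts can be enumerated by ID; the coded counts below are invented placeholders for what hand-coding would produce:

```python
import random

# 'Core sample': draw a random subset of post IDs for hand-coding.
# In practice post_ids would come from the platform; here it is invented.
random.seed(1)                    # make the sample reproducible
post_ids = range(1, 2001)         # e.g., 2,000 posts in the archive
sample = random.sample(post_ids, 100)

# Placeholder tallies standing in for what hand-coding the sampled
# posts against the three indicators above might produce.
coded = {"shows_vulnerability": 14, "personal_story": 22, "supportive": 41}
for indicator, count in coded.items():
    print(f"{indicator}: {count}/{len(sample)} = {100 * count / len(sample):.0f}%")
```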

Findings from these types of evaluation approaches, when combined with others (such as focus groups), can aid community leaders in building and maintaining trust in their community.

Sharing Metrics: A Vision

Like online communities themselves, community evaluation is still at an early stage of development. One possibility to advance the field: an online repository and distillery for metrics research, both inside and outside education, in which community leaders and professional researchers would work together to:

  • Collect, aggregate, and synthesize online community data (leaders would collect/submit, researchers would aggregate and synthesize);
  • Iteratively develop and refine (through cycles of use and feedback) metrics appropriate for community leaders to use;
  • Develop, from aggregate data, real-time benchmarks in a variety of areas against which community leaders can measure their own performance;
  • Create regularly updated briefs, using aggregate data and developing metrics, that point to new community innovations and more effective ways to address online community tensions and issues.

In the interim, community leaders should consider sharing their findings with each other to the extent that they’re comfortable doing so. There’s a lot of potentially vital information out there, and given the diversity of communities and the information they collect, what’s pooled and analyzed together is likely to be much greater than the proverbial sum of its parts.
