
Evaluation in Botanic Gardens—Luxury or Necessity?

Number 21 - December 2000
E. Beckmann






Why is evaluation important in botanic gardens? Essentially, for the same reasons that evaluation is important to any agency: it answers questions such as ‘What about ...?’, ‘What would happen if ...?’, ‘Will it work?’, ‘How can we make this better?’ and ‘Did it work?’. Most importantly, evaluation is the only key to the biggest question of all: ‘If we don’t know where we’ve been and where we’re going, how will we know when we get there?’

So why do many botanic gardens, especially the smaller ones, regard formal evaluation as a luxury, to be engaged in only rarely, if at all? And why is even informal evaluation (for example, acting on verbal feedback from visitors) sometimes considered unnecessary (Beckmann 1988; Sutherland 1996)?

What Does Evaluation Involve?

At its most basic level, evaluation involves the collection of information; the analysis of this information in context; the reporting of both the results and their implications; and, arising directly from those implications, recommendations for action. In botanic gardens, evaluation is especially important in ensuring that educational and recreational facilities and services meet visitors’ needs as well as management objectives.

Most commonly, because it is relatively easy to collect, evaluation data is number-based or quantitative (i.e. statistically documented), giving rise to indicators such as visitor numbers and satisfaction ratings. But even basic visitor information, such as numbers and origins, can provide useful planning input. For example, the 87 hectare Kebun Raya Bogor (Bogor Botanic Garden, Indonesia) had more than 1.3 million visitors in 1995, a weekly average of more than 25 000 people (KRI 2000). Most of these visitors, more than 1.26 million annually, were Indonesians on day trips from Jakarta. The remaining 61 000 were international tourists, about half of them from The Netherlands.
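As a quick illustration, the simple planning indicators quoted above can be derived from raw annual counts with a few lines of arithmetic. The sketch below assumes the annual total is the sum of the two quoted components (1.26 million domestic plus 61 000 international); it is only an illustrative calculation, not part of the KRI (2000) source.

```python
# Illustrative sketch: deriving simple planning indicators from the
# annual visitor counts quoted for Kebun Raya Bogor in 1995.
# Assumption: annual total = domestic + international visitors.
domestic = 1_260_000        # Indonesian day-trip visitors
international = 61_000      # overseas visitors
annual_total = domestic + international

weekly_average = annual_total / 52          # visitors per week
domestic_share = domestic / annual_total
international_share = international / annual_total

print(f"Weekly average: {weekly_average:,.0f} visitors")   # more than 25 000, as quoted
print(f"Domestic share: {domestic_share:.0%}")
print(f"International share: {international_share:.1%}")
```

Ratios like these are often more useful to managers than raw totals: the domestic/international split, for instance, bears directly on decisions about multi-language labelling.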

While this kind of information is very basic in terms of evaluation, it can be extremely useful in helping managers prioritise visitors’ needs. Even such simple statistics may help determine the need for internationally-recognised symbols or multi-language labels, show whether more emphasis should be given to the expectations of local visitors rather than international ones, or indicate the potential for user-pay guided tours in specific languages. Collecting reliable visitor statistics is thus the very first step in an effective visitor research and evaluation program that feeds into master planning processes.

The next step involves awareness of trends and patterns in visitation. Again using the Bogor example, Sundays in 1995 were the busiest days of the week, averaging 15 000 visitors, but on national holidays visitation could easily reach 40 000 (KRI 2000). Awareness of such trends has major implications for decisions related to site hardening, staffing, security and interpretation.

Of course, visitation trends in outdoor environments are prone to the impacts of weather, on both a daily and a seasonal basis. At the Holden Arboretum (Mentor, Ohio, USA), long-range planning in the late 1980s included investigating seasonal differences in visitor populations. The findings showed substantial differences: in Winter, for example, about 80% of visitors were Arboretum members, with few people visiting for the first time, while in Summer and Fall more than half the visitors were non-members, with many being new to the arboretum (Hood 1988).

However, quantitative detail alone rarely provides all the clues needed in an evaluation puzzle. The missing information, about staff and visitor perceptions, values, attitudes, needs and affective (emotional) responses, is the qualitative data that comes from people giving their opinions and responses in their own words. Most effective are mixed method evaluations, which combine quantitative and qualitative techniques. Also vital, however, is the timing and context of the evaluation, as this determines the kind of information collected and the uses to which it can be put, as described in the next section.

Types of Evaluation

Audience research is an essential component of most evaluation, because it must be the visitors, the audience, who are the ultimate judges of the effectiveness of botanic garden facilities and services. Understanding who these visitors are and what they are like, their needs, motivations, expectations, attitudes and recreational behaviours, is therefore a key preparation for effective planning and implementation. For example, understanding visitors’ motivations and expectations (why they come and what they expect to see, find and do) makes it easier to provide appropriate facilities and services. But each botanic garden must understand its own audience. Do residents in urban areas visit gardens for the same reasons as people from rural areas (Bennett 1995)? Do visitors expect the same from a botanic garden in a capital city as they do from one in a small town? How important are cultural differences in visitors’ motivations and interests? Should a botanic garden be satisfied with mostly local visitors or should it market itself to a broader visitor population? What environmental and horticultural messages are appropriate for different gardens and different visitor populations?

Front-end evaluation answers the early ‘What about ...?’ and ‘What would happen if ...?’ planning questions, allowing preliminary ideas to be discussed by the kind of people most likely to be affected by them. For example, the manager of a regional botanic garden may wonder whether to continue maintaining a display of exotic international species. Front-end evaluation would enable broad exploration of this idea with people who would be affected by such a decision, from the horticulturalists and volunteer guides to the regular visitors and local schools. Understanding the likely reactions of all the stakeholders before significant decisions are made, or major costs incurred, can prevent much heartache, recriminations or expensive errors.

Formative evaluation, or trialling, answers the ‘Will it work?’ planning questions by testing specific services with their specific audiences. For example, an educational worksheet for 6 to 10 year olds may have been designed for visiting school groups. Formative evaluation would mean testing the worksheet with such visitors, to ensure that the language is appropriate, that the children understand the directions to different parts of the gardens, and that the content is interesting. When formative evaluation is included in the development of new facilities or programs, problems can be detected early, before 50 000 copies of a guidebook are printed, or expensive signs are finalised, or four-hour guided tours are scheduled. Mistakes rectified during trialling leave few scars, but when there is no trialling mistakes can prove very costly and embarrassing. Even big structures or facilities can and should be trialled, at least in model format (consider the recent embarrassment in my own city where a new airport ‘Welcome to ...’ sign was built facing away from the airport, so that it was readable only by people departing the city!).

Remedial evaluation, or monitoring, answers the ‘How can we make this better?’ questions, enabling existing facilities and services to be kept at their peak of effectiveness. Whether considering the cleanliness of toilets, the value-for-money of an entry fee, the ease with which visitors can find and read plant labels, or the effectiveness of an interpretive sign in communicating an environmental message, the aim of remedial evaluation is to find out whether something is meeting its intended objectives, and if not, how to fix it so that it does. Time is therefore an important component in remedial evaluation; the monitoring must occur frequently and take little time, the results must be available shortly thereafter, and the implications of the results must be able to be acted upon very quickly.

Finally, summative evaluation provides the answers to ‘Did it work?’ At the end of a specific exhibition in the visitor centre, or of a ‘Dance in the Gardens’ program, or of a special children’s summer guided tour program, it is valuable to analyse in detail the relevant objectives, achievements, costs, profits, and feedback from staff and visitors. On the basis of such an evaluation, for example, a botanic garden might decide to run a specific program again only if outside sponsorship can be obtained to defray costs; or to take on twice as many guides next summer because demand outstripped supply; or to put more emphasis into the environmental education program because visitors showed little interest in the photography course.

Considering the Context

Naturally, these evaluation processes must occur in context. For example, anyone considering seasonal visitation in a botanic garden must consider the impact of the growth and flowering habits of the garden’s plants and habitats. Peaks of visitation may occur at any time, even winter, if a particular species or habitat attracts attention at that time. In the Australian National Botanic Gardens in Canberra, for example, the Rainforest Gully is always extremely popular in summer, not only because of its natural beauty and interesting species, but also because it provides a wonderfully cool respite from the dry summer heat (often above 35°C) being experienced elsewhere in the gardens. Awareness of such effects allows interpretive programmers in botanic gardens to capitalise on normal human behaviour by scheduling guided tours at appropriate times and places.

Audience research often feeds into front-end evaluation. For example, the Holden Arboretum study showed that specific organised activities (such as hiking or bird watching) were of interest only to specific visitor groups in certain seasons, with most visitors preferring to learn about plants ‘casually’ (Hood 1988). This understanding came from using standard visitor survey methods, with 569 visitors providing information on their demographic attributes (age, gender, marital status, education, occupation and residence) as well as psychographic characteristics (attitudes, values, opinions, expectations, and levels of satisfaction with different aspects of the visit). Having this range of data allowed Hood (1988) to examine links between specific demographic attributes and visitors’ motivations in visiting the arboretum, especially in terms of Hood’s previous research which had identified six important aspects of leisure activities (being with people, doing something worthwhile, feeling comfortable, having a challenge, learning, and participating actively).

Awareness of the value of detailed audience research for master planning prompted a year-long systematic visitor survey at the Chicago Botanic Garden (CBG, owned by the Forest Preserve District of Cook County), the USA’s second most visited botanic garden. More than 2000 adult visitors provided information on their visit frequency and motivations, interest in special events and interpretive programming, perceptions of the garden, knowledge of plants, interest in gardening, and other leisure pursuits (Hood & Roberts 1994). In this case, respondents’ age proved a useful distinguishing attribute. Visitors aged more than 55 (about 40% of the CBG audience) preferred audiovisual presentations, tour guides, and staff members to answer questions, and tended to be more interested in structured programming. Conversely, visitors aged 18 to 34 (about 20% of the CBG audience) liked the family discovery activities and hands-on exhibits, but tended to be seeking a casual (unprogrammed) experience. The CBG used their visitor data to establish audience and programming priorities, and decided to target the younger audience by developing programming and spaces for family groups (Hood & Roberts 1994).

Balancing Your Findings with Your Mission

It is important to remember that the results of audience research must always be considered by botanic garden managers in the light of their own goals, as indicated by two examples.

At Otari Native Botanic Garden (Wellington, New Zealand), a survey in Summer 1995 found that most people visited to enjoy walking in the bush (66%), for relaxation and tranquillity (55%), to walk their dog (28%), for physical exercise (28%) and for picnicking (12%). About one third of those surveyed were first-time visitors, but a similar proportion were very frequent users, visiting at least once every two weeks. Most visitors came at weekends, and about a quarter were accompanied by children. While almost a fifth of the visitors were essentially neighbours of the botanic garden, living within 300 metres, overseas visitors were also represented in small numbers. Word of mouth was clearly the most important influence in attracting people to Otari, and visitors were generally very satisfied with the facilities, although they wanted more plant labelling.

Here then is a typical visitor profile for a ‘community’ botanic garden, where the recreational needs of the local clientele are dominant and largely satisfied. Should the managers sit back complacently? What about the garden’s communication role? Although the survey showed that 82% of visitors were aware of Otari’s special role in the cultivation and preservation of New Zealand’s native plants, the findings about visitors’ communication needs were unclear: 48% of visitors wanted to know more about the collections, but 52% did not. How should these findings be considered? In the Otari case, even though the ‘majority’ of visitors wanted no more information, the evidence was clear that at least half the visitors would be potentially receptive to additional interpretation. By using formative evaluation techniques to investigate the precise kinds of information wanted, and by creating effective interpretation suited to the community profile, the managers could work to communicate more subtle messages about native plant conservation without affecting Otari’s primary popularity as a recreational area.

The second example illustrates a common use for audience research; to help identify new services or facilities that will meet visitor needs that are currently unrecognised by the gardens’ managers. When 209 visitors at Albury Botanic Gardens (New South Wales, Australia) were asked what other facilities they wanted, about 36% identified ‘picnic tables’.

At first, this appeared to be a relatively simple and inexpensive facility to provide to increase visitor satisfaction. However, the Albury managers asked themselves whether providing picnic tables was consistent with retaining the essential character and purpose of the Albury Botanic Gardens. Picnic tables would introduce a new element into the lawn areas that could easily result in visual clutter, as well as reducing the effectiveness of the lawns as a flat and open contrast to the rich textures of the garden beds and the vertical accents of the feature trees. The movements of visitors around tables would increase lawn wear, as well as making mowing and maintenance more difficult. While bench seating and lawn picnics had always been a feature of the relatively historic gardens (established in the late nineteenth century), tables were not part of this historic character, nor part of the tradition of other Australian botanic gardens of a similar age. Visitors who really wanted picnic tables could find them readily available in a park very close to the Gardens. Taking all these factors into consideration, the Albury managers decided not to provide picnic tables within the Botanic Gardens themselves, regardless of their visitors’ apparent desires.

When audience research or evaluation identifies an apparent clash of visitor and management expectations, the most effective outcome is either to meet visitors’ expectations (for example, by providing the required service) or to explain why the service is not being provided. Visitors at Albury will no doubt be more than happy to remain without picnic tables as long as the reasoning is explained (most appropriately through on-site interpretation).

Evaluation on a Smaller Scale

While broadscale audience research and evaluation are essential, sometimes what is needed is the evaluation of an individual style of communication or programming. For example, many botanic gardens use interpretive signs or brochures to help visitors understand more about relevant plants, habitats, horticultural, environmental or cultural issues. Usually, such signs or leaflets are ‘self-guiding’, in the sense that they act as a major, if not the only, form of communication between the visitor and the staff. Obviously the way in which such signs or leaflets are presented, their graphic design, colour, language, writing style, use of pictures or diagrams etc., are crucial to the effectiveness of communication with the audience. Evaluation can help in these specific cases as well. While there is already much useful general research on communication techniques, it is often important to re-examine these findings with specific audiences, topics or design styles. This is especially important when a botanic garden is producing a Sign Manual or Publication Guidelines to establish precise standards for aspects such as font size and type, colours, writing style and other design elements.

Returning to the Chicago Botanic Garden, Korn (1988) studied the effectiveness of self-guided interpretive brochures in the Japanese Garden. Effectiveness was measured by how much visitors increased their knowledge (‘learning’) about Japanese gardens. Adult visitors were given either a ‘declarative-style’ brochure (providing statements of fact) or an ‘inquiry-style’ brochure (encouraging visitors to ask questions and find the answers for themselves, for example by careful observation), or no brochure at all (the ‘control’ group). While visitors who had been given a brochure learned significantly more than those who had not, the type of brochure appeared to have no significant impact on the level of learning, even though similar research in indoor museum settings had shown that people learn more from inquiry-style interpretive text. Korn (1988) argued that outdoor learning environments such as botanic gardens accommodate a broader range of visitor expectations and learning behaviours because of their more diffuse recreational focus.

Evaluation Techniques

The techniques of evaluation are many and varied, and include counting or observing visitors, interviewing visitors or staff (in either structured or unstructured ways), or asking visitors to fill in questionnaires. Both quantitative and qualitative information can be collected in these ways. Mock-ups or models may be used to stimulate visitor involvement. In front-end and formative evaluation, focus groups are often used. This involves up to twelve people with similar interests or backgrounds being brought together for about two hours, to explore the relevant issue or proposed service under the guidance of an experienced facilitator. Here the collected information is qualitative, emphasising perceptions, feelings, motivations, desires, opinions and attitudes.

Cost should not be an issue in remedial evaluation, which requires keen detective work more than fancy evaluation techniques. Being dedicated to continual improvement, as staff look for clues in visitors’ behaviour and comments, and being humble enough always to respond to those clues, is really what makes evaluation effective. In high-cost consumer research, for example, corporate clients view their focus groups through one-way mirrors or closed-circuit TV, with sessions recorded on video. But the benefits of focus groups need not be forsaken by those with more limited budgets; the crucial elements are: the focus (what exactly is being discussed?), the non-judgemental nature of the discussion (an individual’s perceptions and opinions are valid and should be noted, even when they are not shared by others), and the effective neutrality of the facilitator (whose aim is to help the group address all aspects of the topic, not support any specific response).


There are many reasons why evaluation should be an essential and integral feature of the practices in botanic gardens, and many ways in which it can be implemented for minimal cost. But there is no recipe or magic formula that will work in every case. Quantitative and qualitative methods all have their advantages and disadvantages when it comes to designing, implementing, reporting or using evaluation studies (for detailed discussion see evaluation texts such as Miller 1991; Rossi & Freeman 1993). The challenge is to find a judicious balance in each situation, and to recognise that every little clue is important: informal conversations with visitors, noticing the shortcuts when paths go the ‘wrong way’, and observing how children use the gardens are all forms of evaluation. Whatever your budget, whatever your situation, don’t just think about evaluation—do it!


Beckmann, E.A. (1988) Interpretation in Australia—Some Examples outside National Parks. Australian Parks and Recreation 24 (3): 8-12.
Bennett, E. S. (1995) The Psychological Benefits of Public Gardens for Urban Residents. Master's thesis, Longwood Graduate Program, University of Delaware, USA.
Hood, M.G. (1988) Arboretum visitor profiles as defined by the four seasons. In: Bitgood, S., Roper, J.T. & Benefield, A. (eds) Visitor Studies 1988: Theory, Research and Practice. Jacksonville, Alabama, USA: The Center for Social Design.
Hood, M.G. & Roberts, L.C. (1994) Neither too young nor too old: A comparison of visitor characteristics. Curator 37 (1): 36-45.
KRI (2000) Kebun Raya Indonesia website, accessed 3 September 2000.
Korn, R. (1988) Self-guiding Brochures: An Evaluation. Curator 31 (1): 9-19.
Miller, D.C. (1991) Handbook of Research Design and Social Measurement. Thousand Oaks, USA: Sage Publications.
Rossi, P.H. & Freeman, H.E. (1993) Evaluation: A Systematic Approach. Newbury Park, USA: Sage Publications.
Sutherland, L. A. (1996) Interpretation and Visitor Services: An Evaluation of Policies and Practices in Australia’s Botanic Gardens. Master of Applied Science thesis, Charles Sturt University, New South Wales, Australia.

Evaluation in Botanic Gardens - Luxury or Necessity?


Evaluation asks questions such as ‘What would happen if ...?’, ‘Will it work?’ and ‘How can we make this better?’. The results of evaluation make it possible to answer the most important question of all: ‘If we don’t know where we’ve been and where we’re going, how will we know when we get there?’

At its most basic level, evaluation consists of collecting information, analysing it in context, reporting both the results and their implications, and deriving from those implications recommendations for action. In botanic gardens, evaluation is especially important in ensuring that educational and recreational facilities and services meet visitors’ needs as well as management objectives.

Evaluation can help garden managers prioritise visitors’ main needs by studying fluctuations in visitation. In addition, qualitative information on the perceptions, values, attitudes, needs and affective (emotional) responses of staff and visitors is key to effective planning and implementation. There are several types of evaluation: front-end, formative, remedial and summative. The techniques used are many and varied, from questionnaires, interviews with visitors and staff, and counting and observing visitors, to informal conversations with visitors and observing how children use the garden. The challenge for botanic gardens is to find the best evaluation technique, whatever the budget and the situation. Don’t just think about evaluation—do it!

Evaluation in Botanic Gardens – Luxury or Necessity?


Evaluation answers questions such as ‘What about ...?’, ‘What would happen if ...?’, ‘Will it work?’ and ‘How can we make this better?’. More importantly, the results of an evaluation answer the most important question of all: ‘If we don’t know where we’ve been and where we’re going, how will we know when we get there?’

At a basic level, evaluation involves collecting information; analysing that information in context; reporting both the results and their implications; and, arising directly from those implications, making recommendations for action. In botanic gardens, evaluation is especially important in ensuring that educational and recreational facilities and services meet visitors’ needs as well as management objectives.

Evaluation can help botanic garden staff prioritise visitors’ needs by examining quantitative information such as visitation preferences and patterns. In addition, investigating qualitative information on the perceptions, values, attitudes, needs and affective responses of staff and visitors provides key input for effective planning and implementation. There are several types of evaluation, including front-end, formative, remedial and summative evaluation. The techniques used are many and varied, from visitor questionnaires, interviews with visitors and staff, and counting and observing visitors, to informal conversations and observing how children use the garden. The challenge for botanic gardens is to identify the best evaluation technique for each specific situation, whatever the budget. Don’t just think about evaluation—do it!

Receive Roots Regularly
Roots is a bi-annual international education review and essential reading for anyone working in the area of environmental education. Content is in English, French and Spanish.