
Clinical Algorithms and Flow Charts as Representations of Guideline Knowledge

Thursday, September 1st, 2005
Rob Cook

Medical Advisor / Project Manager

New Zealand Guidelines Group, Auckland New Zealand

Guideline development and implementation rely on emerging concepts within the broad domain of health knowledge management. This paper explores these concepts as they apply to the development of clinical algorithms within New Zealand guidelines.

A common understanding of such concepts as "knowledge acquisition, representation and dissemination" is almost taken for granted.[ 1 ] Amongst people from different academic backgrounds (eg, clinical, computer science or business) these terms can now be used to convey meaning in communication between groups. In contrast, the terms "algorithm", "decision trees" and "flow chart" have often been borrowed from one academic domain and used in another. This has led to the confusing situation where, although the same words are used, the concepts and meaning conveyed can vary according to the user. If these "graphical summary" representations are to be used in health knowledge management and are to be useful to the end-user, a shared understanding of the meaning, purpose and quality of these representations is required.

This paper aims to promote a discussion of these issues by specifically evaluating the purpose and quality of algorithms developed for knowledge management in a sample of New Zealand clinical practice guidelines. Some opportunities for improvement are identified. 

There is evidence that the use of algorithms can result in faster learning, higher retention and better compliance with practice standards than standard prose text.[ 2-4 ] The various understandings of the term "algorithm" stem from its use in different situations and for different audiences. The selection of a "mediating representation" has been seen as a critical activity in the knowledge modelling process.[ 5 ] The appropriate choice should be right for the domain (for the kinds of knowledge to be represented), right for the task (for what needs to be done with the knowledge), and right for the user (human or machine).[ 6 ] If guidelines are to be fit for both human and machine consumption, algorithms will need to be right for all parts of the tri-model architecture recently proposed by Stephen Chu in this journal (structured document model, guideline knowledge element model and guideline execution model).[ 7 ]

Algorithms: The MeSH term is defined by the National Library of Medicine in its standard vocabulary as "procedures consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task".[ 8 ]

Algorithms, from a computer science perspective, have also been described by Turban.[ 9 ] In this book algorithms are NOT seen as part of knowledge representation but, instead, as a knowledge base organised in different configurations to facilitate inferencing (or reasoning). These organisations can be one of a number of schemata (eg, decision tables or trees), production rules (IF … THEN … ELSE rules) or frames (data structures). Decision trees can be seen as a type of algorithm but, importantly, the words algorithm and decision tree, borrowed as they were from a mathematical and computing paradigm, convey a precision and logic that is often not present when these terms are used in a guideline sense.
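To illustrate the computer-science sense of these terms, the following minimal sketch shows a production-rule knowledge base with a simple forward-chaining inference step. The rules, facts and clinical conditions are entirely hypothetical illustrations, not recommendations from any cited guideline:

```python
# A minimal production-rule sketch in the Turban sense: the knowledge
# base is a list of IF ... THEN rules, and a forward-chaining loop
# performs the inferencing. All rules and facts are hypothetical
# illustrations, not clinical recommendations.

def forward_chain(facts, rules):
    """Fire any rule whose conditions all hold until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# IF age over 75 THEN high stroke risk;
# IF atrial fibrillation AND high stroke risk THEN consider anticoagulation.
RULES = [
    ({"age_over_75"}, "high_stroke_risk"),
    ({"atrial_fibrillation", "high_stroke_risk"}, "consider_anticoagulation"),
]

derived = forward_chain({"atrial_fibrillation", "age_over_75"}, RULES)
print(sorted(derived))
```

The precision visible here, every condition and conclusion fully specified, is exactly what the borrowed terms imply but guideline diagrams often lack.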

It is clear that specific, unambiguous and structured knowledge is required to develop algorithms/decision trees for interpretation by computer systems. In the context of artificial intelligence, algorithms are mainly a construct to assist knowledge dissemination or translation of knowledge into decision rules for an inference engine. Clinical decision support systems (CDSS), for example, have been described as systems that "use two or more items of patient data to generate case-specific advice".[ 10 ] Electronic CDSS are typically designed to integrate a medical knowledge base, patient data and an inference engine to generate case-specific advice. Standards have been called for in this area and are now seen as essential to improvements in the process of "guideline transformation" for electronic CDSS.[ 11 ] Despite these standards, there are as yet no standards for text algorithms within guidelines. These textual algorithms are rarely sufficient on their own for the development of decision rules.
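The three-part CDSS pattern described above (knowledge base, patient data, inference engine) can be sketched minimally as follows; the rule, the field names and the threshold are hypothetical illustrations, not clinical guidance:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    # Two items of patient data, the minimum a CDSS combines
    # into case-specific advice.
    systolic_bp: int      # mmHg
    has_diabetes: bool

def advise(patient: Patient) -> str:
    """A one-rule 'inference engine'; the threshold is illustrative only."""
    if patient.has_diabetes and patient.systolic_bp >= 140:
        return "review blood pressure management"
    return "no advice triggered"

print(advise(Patient(systolic_bp=150, has_diabetes=True)))
```

Even this toy shows why textual algorithms are rarely sufficient on their own: the threshold, units and population must all be made explicit before a rule can execute.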

Flow charts: These have often been used in a business environment to model or represent knowledge. They can define process flows within a production environment or illustrate other time dependent sequences in a project management setting. The term has again been borrowed by guideline developers to describe representations of diagnostic or treatment processes occurring over time.

Risk stratification schemes:  Becoming more common in guidelines, these form part of a diagnostic process which aims to divide a population into subgroups prior to management decisions which are tailored to the level of risk within the group. These schemes often feed into a decision-tree type algorithm.
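The shape of such a scheme, a stratification step whose subgroup then selects a management branch, can be sketched as follows. The scores, thresholds and actions are invented purely for illustration:

```python
def stratify_risk(score: float) -> str:
    """Divide a population into subgroups by a hypothetical risk score."""
    if score >= 20:
        return "high"
    if score >= 10:
        return "moderate"
    return "low"

# Management tailored to the level of risk within each subgroup;
# the actions below are placeholders, not clinical recommendations.
MANAGEMENT = {
    "high": "intensive intervention and early review",
    "moderate": "lifestyle advice and routine review",
    "low": "routine care",
}

def manage(score: float) -> str:
    # The stratification output feeds a decision-tree-style choice.
    return MANAGEMENT[stratify_risk(score)]
```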

Clinical pathways:  The semantic soup is further complicated by the increasing use of "clinical pathways" and "care plans". These have been defined as "evidence based, multidisciplinary plans of care. They may be for patients who have been diagnosed with a specific condition (diagnosis based), who are having a particular procedure (procedure based), or who are presenting with a particular symptom (symptom based)".[ 12 ] They are time- and stage-oriented tools, used to synchronise the activities of health care teams to achieve predetermined patient outcomes and provide a continuum of care. The key features of these appear to be their purpose for multidisciplinary co-ordination. 

Algorithm Development as A Means of Knowledge Acquisition
There is another use for graphical representations within guidelines which relates to knowledge acquisition and learning. Often the knowledge required for guideline development is drawn from the literature following an evidence-based process. A systematic search for answers to structured questions is followed by appraisal and discussion by topic experts who draft evidence-based recommendations for practice.[ 13 ] However, much expert knowledge is implicit and/or tacit and the development of summary algorithms by groups of clinicians can help in codifying such implicit knowledge. Algorithms could theoretically then be documented and disseminated as text or translated into decision rules for electronic CDSS.

Distinguishing Features of Algorithms Used in Clinical Practice Guidelines
In guidelines, "algorithm" has come to be used as an umbrella term to describe all forms of decision tree, flowchart, risk stratification plan, protocol or care-plan/pathway, when represented graphically. These uses are summarised in table 1.

Table 1: Features of algorithms used in clinical practice guidelines

Object class | Primary knowledge management use | Key feature | New Zealand example*
Decision tree | Translation | Decision/action choices | Atrial fibrillation management
Flow chart | Representation and dissemination | Time-based choices | Asthma
Risk stratification scheme | Acquisition and representation | Choices following diagnosis or risk assessment, based on severity | New Zealand cardiovascular disease risk charts
Clinical pathway | Dissemination | Multidisciplinary choice | Stroke
* New Zealand examples available at

To simplify the analysis in this paper, the word algorithm is used in this global sense, to describe all the summary graphical representations found within clinical practice guidelines. It is acknowledged that this is, perhaps, an incorrect use of the word from a health informatics point of view, where the term more correctly refers to the object class decision tree referred to above. Some guideline groups develop "decision tree" algorithms retrospectively from decision tables. These tables are used in the translation process of recommendations into rules for electronic CDSS.[14-15] This process may require a further check for validity with the original authors. In practice, the process is iterative and the logic behind decision trees, decision rules and textual guideline algorithms should be the same irrespective of the sequence in which they are developed.
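A decision table of the kind some groups translate into decision trees and CDSS rules can be sketched as a mapping from condition combinations to actions; the conditions and actions here are hypothetical:

```python
# Each row of the table pairs a combination of condition values
# with an action. Enumerating every combination makes the logic
# explicit and easy to check for completeness before it is drawn
# as a decision tree or encoded as rules.
DECISION_TABLE = {
    # (symptom_present, test_positive): action
    (True, True): "treat",
    (True, False): "investigate further",
    (False, True): "repeat test",
    (False, False): "reassure",
}

def decide(symptom_present: bool, test_positive: bool) -> str:
    return DECISION_TABLE[(symptom_present, test_positive)]
```

Whichever artefact is authored first, the table, the tree or the rules, the underlying logic should be identical, which is why an iterative validity check with the original authors is valuable.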

Algorithm development is seen as an essential part of most guideline development processes in the US, Canada, Germany, UK[16-17] and Australia.[18] In New Zealand the methodology of development is described in a handbook[19] and is taught to guideline development teams at an early meeting of the team. All NZGG guidelines listed by the National Guidelines Clearinghouse ( were selected using the advanced search function. These were reviewed against the classification proposed in table 1. The decision tree and hybrid algorithms were scored against a checklist developed from the work of Sailors[20] (see appendix) and the international standards proposed by the "Society for Medical Decision Making".[21] This was simplified to five domains using the mnemonic CLUES (see table 2).

Table 2: Proposed quality domains for guideline algorithms

Domain | Descriptor
C Concise | Simple description of content and intent of the algorithm, including target clinician and population groups to whom it applies
L Logical | Structured flow according to a recognised standard formalism
U Unambiguous | Fully defined clinical terms and ranges for decision points
E Evidence linked | Evidence relating to decision points is provided as links to graded recommendations and references
S Specific purpose and type | For risk assessment, diagnosis, therapy, management or referral. Classified as flowchart (time), decision tree (decision/action), clinical pathway (multidisciplinary) or risk stratification scheme (severity of disease)

The analysis is summarised in table 3 and demonstrates several findings. Algorithms with more than one type or purpose are classed as hybrid schemes; this often indicates an attempt to simplify and summarise diagnostic, treatment and referral information onto one page.

Algorithm Type:

  • Not all NZ guidelines contain algorithms.
  • A variety of algorithm types is used in clinical practice guidelines within New Zealand.
  • Most summary diagrams contain decision/action points represented by diamond shapes.
  • Diamonds are also included in diagrams where the aim is risk stratification and within flow charts that describe management choices over time.
  • Hybrid schemes are common.
  • None of the decision trees identified seem sufficient on their own to be used for translation into electronic CDSS.

Algorithm Quality:

  • All guideline algorithms found were concise and logical with defined purposes; however, many used ambiguous and undefined terms.
  • Few had explicit links to evidence or recommendations.
  • The format and layout were generally, but not always, adhered to.

A further paper will describe the development of the checklist used for assessing the quality of algorithms within clinical practice guidelines.

Table 3: An assessment of algorithms in New Zealand clinical practice guidelines

# | Guideline title | Algorithms | Classification of algorithm type
1 | Smoking cessation: 2002 | No algorithm |
2 | The diagnosis and treatment of adult asthma | 4 algorithms | 1 diagnostic decision tree, 1 risk stratification scheme/flow chart and 2 hybrid flow chart/decision trees for management (acute and chronic)
3 | Cardiac rehabilitation | No algorithm |
4 | Soft tissue knee injuries: internal derangements | 2 algorithms | 1 decision tree for differential diagnosis and 1 for management decisions
5 | Prevention of hip fracture >65 years | 1 algorithm | 1 risk stratification scheme for fall prevention management
6 | Acute management and immediate rehabilitation after hip fracture >65 years | No algorithm |
7 | The assessment and management of people at risk of suicide | 1 algorithm | 1 hybrid risk stratification/decision tree for severity assessment and initial management. 1 form to start a care pathway
8 | Assessment processes for older people | 2 algorithms | 1 risk stratification summary and 1 flow chart for carer support and assessment
9 | The assessment and management of cardiovascular risk | 1 algorithm | 1 risk stratification scheme
10 | Management of type 2 diabetes | 5 algorithms | 5 hybrid flow charts and decision trees (some also with elements of risk stratification and care pathways): glycaemic control, CVD risk, renal, eye and foot complications
11 | Life after stroke | 1 algorithm | 1 flow chart presented as a care pathway
12 | Groups at increased risk of colorectal cancer | No algorithm |
13 | Dyspepsia and heartburn | 5 algorithms | 5 decision trees arranged to flow sequentially, covering initial evaluation and management of undifferentiated dyspepsia, GORD, peptic ulcer and NSAID complications
14 | Shoulder injuries and related disorders | 1 algorithm | 1 decision tree for diagnosis with risk stratification for referral (red flags)
15 | Women with breech presentation or previous caesarean birth | 3 algorithms | 3 decision trees on antenatal care for breech, breech labour and vaginal birth after prior caesarean
16 | Atrial fibrillation and flutter | 5 algorithms | 1 risk stratification scheme and 4 linked management decision trees

This review of the quality and purposes of NZ guidelines has several limitations. First, it was restricted to guidelines available on the NZGG website and these may not be representative of other guidelines produced by professional societies in New Zealand or those that are available in print only. Second, the assumption, supported by guideline development manuals, is that all guidelines should contain a clinical algorithm. However there may be better non-graphical ways to represent some types of knowledge that were reasonably chosen by the guideline teams concerned.

New uses for clinical practice guidelines have developed over the last few years. Rather than being used as foundations for medical education, guidelines are now often used as reference documents for broader, quality improvement initiatives and as text for translation into rules for electronic CDSS. These new purposes bring new challenges in summarising knowledge (both evidence-based knowledge and expert procedural knowledge).

There is debate in the literature about the best formalism for representing guideline knowledge, and three architectures have been proposed.[ 7 ] Standards have been called for in representing guidelines as a document, a knowledge element model or an execution model.[ 22 ] The absence of commonly agreed standards for textual algorithms has created major difficulties for guideline developers, implementers and CDSS designers.[ 23 ]

Checklists have been successfully developed for assessing the methodological quality of guidelines[ 24 ] and for standardising the textual components in guidelines.[ 22 ] Consistency may be improved by using a similar consensus-based process to agree the main types of summary algorithm seen in text guidelines and to better define purpose and quality.

This consistency would be the first step towards setting standards for representing algorithms and would be welcomed by clinicians and electronic CDSS designers.

Without an agreed system of classification, the diverse methodology of developing and representing algorithms within clinical practice guidelines is a barrier to the dissemination of knowledge. The lack of standardisation will make re-use and updating guidelines difficult. Very few algorithms are explicit in referencing their sources of evidence.

Suggestions for the future
To maximise the use of algorithms for knowledge management purposes, several ideas for the future are suggested:

  1. Guideline developers could clarify the main domain with clear definitions of algorithm purpose, topic and target groups.
  2. Those who commission guidelines could clarify the task required: acquisition of implicit knowledge, representation of expert knowledge, translation for electronic CDSS.
  3. Guideline authors could consistently use the terms flow chart, decision tree, risk stratification scheme and algorithm for the agreed diagram types.
  4. The international guideline community could agree on high quality standards that would allow for systematic assessment of algorithms.
  5. Software vendors could agree on sufficient standardisation to allow consistency but also support a diversity of representations that take into account the different needs of user groups and software systems.

There are many approaches for creating computer-interpretable guidelines that facilitate decision support.[ 25 ] Attempts to facilitate agreement across disciplines would smooth the flow of knowledge from expert clinician and knowledge engineer into guideline implementation using electronic CDSS. Ultimately, this cycle takes knowledge from a group of expert clinicians back to the clinician as end-user. Algorithms potentially have a central position in knowledge acquisition, representation and dissemination. The opportunity to realise this potential should not be missed.

Table 4: A classification overview from Sailors[ 20 ]

Elements of a “good” algorithm | Algorithm class 0 | 1 | 2 | 3 | 4
Concise description of content and intent of algorithm | +/- | + | + | + | +
Description of inclusion and exclusion patient groups | +/- | + | + | + | +
Structured repeatable algorithm | +/- | +/- | + | + | +
Fully specified concepts | - | +/- | + | + | +
Fully specified decision points | - | - | + | + | +
Fully specified action steps | - | - | +/- | + | +
Formal expression language | - | - | - | +/- | +
Formalism to describe the flow of the algorithm | - | - | - | - | +
Encoded links to didactics, references, on-line resources | - | - | - | - | +/-
Key: + always present, - always absent, +/- may be present.


  1. Davidson C, Voss P. Knowledge management. Auckland: Tandem Press; 2002.
  2. Grimm RH Jr, Shimoni K, Harlan WR Jr, Estes EH Jr. Evaluation of patient-care protocol use by various providers. N Engl J Med 1975; 292(10):507-11.
  3. Komaroff AL, Black WL, Flatley M, Knopp RH, Reiffen B, et al. Protocols for physician assistants. Management of diabetes and hypertension. N Engl J Med 1974; 290(6):307-12.
  4. Sox HC Jr. Quality of patient care by nurse practitioners and physician’s assistants: a ten-year perspective. Ann Intern Med 1979; 91(3):459-68.
  5. Shiffman RN, Michel G, Essaihi A, Thornquist E. Bridging the guideline implementation gap: a systematic, document-centered approach to guideline implementation. J Am Med Inform Assoc 2004; 11(5):418-26.
  6. Bench-Capon TJM. Knowledge representation: an approach to artificial intelligence. London: Academic Press; 1990. p. 220.
  7. Chu S. Guideline representation formalism and electronic decision support systems: addressing the guideline-implementation gap. Health Care and Informatics Review Online. 2005. Accessed 27 May 2005.
  8. MeSH database: controlled vocabulary used for indexing articles for MEDLINE/PubMed. National Library of Medicine, PubMed. 2005 edition. Accessed 25 May 2005.
  9. Turban E, Aronson JE. Decision support systems and intelligent systems. 6th ed. Upper Saddle River, NJ: Prentice Hall; 2001.
  10. Wyatt J, Spiegelhalter D. Field trials of medical decision-aids: potential problems and solutions. Proc Annu Symp Comput Appl Med Care 1991: 3-7.
  11. Entwistle M, Shiffman RN. Turning guidelines into practice: making it happen with standards. Health Care and Informatics Review Online. 2005. Accessed 27 May 2005.
  12. Ministry of Health. Toward clinical excellence: an introduction to clinical audit, peer review and other clinical practice improvement activities. Wellington, New Zealand: Ministry of Health; 2002.
  13. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 1996; 312(7023):71-72.
  14. Colombet I, Aguirre-Junco A-R, Zunino S, Jaulent M-C, Leneveut L, et al. Electronic implementation of guidelines in the EsPeR system: a knowledge specification method. Int J Med Inform 2005; 74(7-8):597-604.
  15. Shiffman RN. Representation of clinical practice guidelines in conventional and augmented decision tables. J Am Med Inform Assoc 1997; 4(5):382-93.
  16. Scottish Intercollegiate Guidelines Network. SIGN 50: a guideline developers’ handbook. 2001, last updated May 2004. Accessed 27 May 2005.
  17. National Institute for Clinical Excellence. Guideline development methods: information for national collaborating centres and guideline developers. 2004. Accessed 27 May 2005.
  18. National Health and Medical Research Council. A guide to the development, implementation and evaluation of clinical practice guidelines. 1998. Accessed 27 May 2005.
  19. Effective Practice Institute. Handbook for the preparation of explicit evidence-based clinical practice guidelines. 2003. Accessed 27 May 2005.
  20. Sailors RM. A proposed classification scheme for multi-step clinical care algorithms. In: AMIA Symposium proceedings. 2001. Accessed 23 April 2005.
  21. Society for Medical Decision Making Committee on Standardization of Clinical Algorithms. Proposal for clinical algorithm standards. Med Decis Making 1992; 12(2):149-54.
  22. Shiffman RN, Shekelle P, Overhage JM, Slutsky J, Grimshaw J, et al. Standardized reporting of clinical practice guidelines: a proposal from the conference on guideline standardization. Ann Intern Med 2003; 139(6):493-8.
  23. Jenders RA, Sailors RM. Convergence on a standard for representing clinical guidelines: work in Health Level Seven. Medinfo 2004; 11(Pt 1):130-4.
  24. Cluzeau FA, Littlejohns P, Grimshaw JM, Feder G, Moran SE. Development and application of a generic methodology to assess the quality of clinical guidelines. Int J Qual Health Care 1999; 11(1):21-8.
  25. de Clercq PA, Blom JA, Korsten HH, Hasman A. Approaches for creating computer-interpretable guidelines that facilitate decision support. Artif Intell Med 2004; 31(1):1-27.