Volume 76, Issue 5, pp. 602-607
Editorial
Open Access

Artificial intelligence for image interpretation in ultrasound-guided regional anaesthesia

J. Bowness, Clinical Lecturer/Honorary Specialty Registrar

Institute of Academic Anaesthesia, University of Dundee, Dundee, UK

Department of Anaesthesia, Ninewells Hospital, Dundee, UK

Correspondence to: J. Bowness. Email: [email protected]
K. El-Boghdadly, Consultant/Honorary Senior Lecturer

Department of Anaesthesia and Peri-operative Medicine, Guy's and St Thomas' NHS Foundation Trust, London, UK

King's College London, London, UK
D. Burckett-St Laurent, Consultant

Department of Anaesthesia, Royal Gwent Hospital, Newport, UK
First published: 29 July 2020

Here is my prophecy: In its final development, the telephone will be carried about by the individual, perhaps as we carry a watch today. It probably will require no dial or equivalent, and I think the users will be able to see each other, if they want, as they talk. —Mark R Sullivan (Pacific Telephone and Telegraph Co., 1953)

The initial challenge presented to a practitioner during ultrasound-guided regional anaesthesia is the interpretation of sono-anatomy upon placing a probe on the patient. To date, technological advancements have focused on methods to enhance needle viewing [1]. Sono-anatomical interpretation remains an under-explored avenue of research to improve the availability, efficacy and safety of regional anaesthetic techniques. We present the case for the use of artificial intelligence (AI) in identifying key anatomical features to facilitate ultrasound-guided regional anaesthesia.

Ultrasound image analysis in ultrasound-guided regional anaesthesia

Ultrasound guidance has been a major advancement in regional anaesthesia since the turn of the century. It is often accepted that ultrasound has led to improved outcomes following regional anaesthesia, although it is not clear that it has reduced the incidence of nerve trauma [2].

The American Society of Regional Anesthesia and Pain Medicine and the European Society of Regional Anaesthesia and Pain Therapy joint committee recommendations for education and training in ultrasound-guided regional anaesthesia categorise four activities [3]:
  1. Understanding device operations
  2. Image optimisation
  3. Image interpretation (locating and interpreting anatomy under ultrasound)
  4. Visualisation of needle insertion and injection (needle-probe orientation; the maintenance of needle visualisation; and optimal anatomical view whilst moving the needle towards the target object)

Much effort has been directed towards needle guidance systems and echogenic needles to improve needle visibility [1]. However, augmenting image interpretation has received less attention – despite a sound understanding and interpretation of sono-anatomy being required for the practice of ultrasound-guided regional anaesthesia [3, 4]. This is particularly pertinent as anatomical knowledge among anaesthetists is known to be imperfect [5]. Human image analysis is similarly fallible [6] and human performance is subject to fatiguability [7].

The many difficulties in acquiring and maintaining the skill sets involved in anatomical recognition and needle guidance also restrict the number of clinicians confident and able to perform ultrasound-guided regional anaesthesia. Currently, the majority of peripheral nerve blocks are performed by a restricted number of experts [4]. Breaking down these barriers may particularly enhance uptake by non-expert regional anaesthetists. Ultrasound-guided regional anaesthesia also has the potential to be employed more widely, for example, by nurse anaesthetists, emergency medicine physicians, armed forces/battlefield medical practitioners and those treating pain in chronic pain clinics or palliative care. Widening patient access to these techniques has the potential to directly address several of the anaesthesia and peri-operative care priorities of the James Lind Alliance [8].

Artificial intelligence, machine learning and deep learning in anaesthesia

Artificial intelligence is a general term which includes machine learning and deep learning (Fig. 1). There has been a recent proliferation of publications relating to the utility of AI, in particular machine learning, in the peri-operative setting [7, 9]. Most focus on systems to assimilate and analyse data input from multiple sources, to assist in pre-operative assessment and risk stratification, monitor depth of anaesthesia/sedation, enhance early detection of unwell patients, or predict intra-operative adverse events (e.g. hypotension) and postoperative outcomes (e.g. pain and mortality) (Table 1). However, implementation of these technologies in clinical practice is not yet commonplace [9].

Figure 1. A summary of artificial intelligence, machine learning and deep learning.
Table 1. Potential artificial intelligence applications to anaesthetic practice, based on examples of current evidence.

Pre-operative

Risk stratification during pre-operative assessment (to influence anaesthetic technique and for outcome prediction)
  - Karpagavalli et al. [10] trained three supervised machine learning systems on pre-operative data (37 features) from 362 patients
  - These systems were able to accurately categorise patients into low-, medium- and high-risk groups (broadly correlating with ASA grade)

Intra-operative

Automated ultrasound spinal landmark identification in neuraxial blockade
  - Oh et al. [11] have demonstrated improved spinal ultrasound interpretation and first-pass spinal success using an intelligent image-processing system to identify spinal landmarks

Prediction of post-induction/intra-operative hypotension
  - Wijnberge et al. [12] demonstrated the ability to reduce the duration and depth of intra-operative hypotension through the use of a machine learning-derived early warning system

Prediction of post-intubation hypoxia
  - Sippl et al. [13] retrospectively analysed data from 620 cases to develop a machine learning system capable of predicting post-intubation hypoxia to the same level as that observed by medical experts

Monitoring/control of level of sedation/hypnosis
  - Lee et al. [14] present a deep learning model, trained on data sets from 131 patients, to predict bispectral index response during target-controlled infusion of propofol and remifentanil

Postoperative

Prediction of postoperative in-hospital mortality
  - Fritz et al. [15] present a deep learning model based on patient characteristics and peri-operative data to predict 30-day mortality

Prediction of analgesic response
  - Misra et al. [16] use machine learning for the automated classification of pain state (high and low) based on EEG data

EEG, electroencephalogram.

Machine learning in ultrasound-guided regional anaesthesia

Published work includes the study of automated nerve and blood vessel identification for ultrasound-guided regional anaesthesia [17]. Indeed, medical image interpretation is a particularly popular focus of research in healthcare AI [18]. One such example is the collaboration between researchers and clinicians at DeepMind (Alphabet Inc., Palo Alto, CA, USA), Moorfields Eye Hospital and University College London, who have developed a system that reaches or exceeds expert performance in the analysis of optical coherence tomography [19]. A similar collaboration has demonstrated equally successful results in the field of breast cancer screening mammography, with an AI system capable of surpassing human experts in breast cancer prediction [20]. It thus follows that image analysis in ultrasound-guided regional anaesthesia could similarly be an area in which assistive machine learning technology may provide patient benefit.

Given the complexity, diversity and operator dependence (leading to inter- and intra-individual variation) in the appearance of anatomical structures on ultrasound, it is difficult to develop nascent AI algorithms that recognise all salient features de novo [18]. Instead, automated medical image analysis can be trained to recognise this wide variety of appearances by 'learning from examples', which is the premise of machine learning [18]. Such assistive technology could be used to enhance interpretation of sono-anatomy by facilitating target identification (e.g. peripheral nerves and fascial planes) and the selection of the optimal block site through demonstrating relevant landmarks and guidance structures (e.g. bone and muscle). The safety profile may be enhanced by highlighting safety structures (e.g. blood vessels) to minimise unwanted trauma.

We postulate that providing a 'head-up display' (a display within the user's existing field of vision) of anatomy in real time, as an adjunct to the conventional narrative and instructions from an expert, may reduce the cognitive load for less experienced operators. It may also reduce the time required for image acquisition and analysis and increase operator confidence. This in turn may improve performance in needle/probe manipulation by increasing spare cognitive capacity for these activities. Head-up and instrument-mounted displays have proven useful in military aviation and the automotive industry [21]. Furthermore, computerised systems are not subject to fatigue and can reproducibly perform the desired activity with complete fidelity [7].

AnatomyGuide (Intelligent Ultrasound Limited, Cardiff, UK) is a system based on AI technologies, developed using B-mode ultrasound video of specific peripheral nerve block regions. Each video is broken into multiple frames, and each frame receives a coloured overlay marking specific structures as landmarks, safety structures or targets. These labelled frames are then used to train the machine learning algorithm, which uses deep learning to develop associations between the labels and the underlying structures. Once trained, the algorithm is able to label raw B-mode ultrasound data in real time on new ultrasound scans of similar regions. System performance is a function of the quantity and quality of labelled data presented during training: the training set used for each block included over 120,000 images to achieve the current level of performance.
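For readers unfamiliar with how such systems are built, the paragraph above describes what is, in essence, supervised semantic segmentation. The sketch below is a minimal, hypothetical illustration in PyTorch: the dataset, the deliberately small network and the training settings are our own assumptions for exposition, and bear no relation to AnatomyGuide's actual implementation.

```python
# Minimal, hypothetical sketch of frame-level segmentation training
# (PyTorch). Nothing here reflects AnatomyGuide's actual design.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Pixel classes: background plus the three structure roles named above.
CLASSES = ["background", "landmark", "safety_structure", "target"]

class UltrasoundFrames(Dataset):
    """Stand-in dataset: B-mode frames paired with expert label masks."""
    def __init__(self, n=16):
        self.images = torch.rand(n, 1, 128, 128)                    # greyscale frames
        self.masks = torch.randint(0, len(CLASSES), (n, 128, 128))  # per-pixel labels
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        return self.images[i], self.masks[i]

# A toy encoder-decoder; production systems use far deeper networks
# trained on >120,000 labelled frames per block region.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, len(CLASSES), 1),           # per-pixel class scores
)

loader = DataLoader(UltrasoundFrames(), batch_size=4, shuffle=True)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):                        # token training loop
    for frames, masks in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(frames), masks)  # compare predictions with labels
        loss.backward()
        optimiser.step()

# Inference: label a new raw frame by taking the most likely class at each
# pixel; the class map can then be rendered as a coloured overlay in real time.
with torch.no_grad():
    new_frame = torch.rand(1, 1, 128, 128)
    overlay = model(new_frame).argmax(dim=1)  # shape (1, 128, 128)
```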

One example of a peripheral nerve block for which an AnatomyGuide model has been well developed is the adductor canal block. The information used to train the algorithm mirrors that given to an inexperienced operator in clinical practice: identification of the relevant anatomy. In this model, the sartorius and adductor longus muscles, as well as the femur, were first identified as landmarks. The optimal block site is chosen as the region where the medial borders of these two muscles align. The femoral artery is labelled as both a landmark and a safety structure, and the saphenous nerve is labelled as a target. The intent is to assist the operator in identifying the nerve and the correct site to target for the block (Fig. 2 and Supporting Information, Video S1); a schematic encoding of this labelling scheme is sketched below.
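To make the labelling scheme concrete, the fragment below encodes it as plain data. This is a hypothetical illustration only: the structure names and roles come from the description above, while the data layout, the function and the 5 mm tolerance are invented for exposition.

```python
# Hypothetical encoding of the adductor canal labelling scheme described
# above. Structure roles follow the text; everything else is illustrative.
ADDUCTOR_CANAL_LABELS = {
    "sartorius": {"landmark"},
    "adductor_longus": {"landmark"},
    "femur": {"landmark"},
    "femoral_artery": {"landmark", "safety_structure"},  # dual role
    "saphenous_nerve": {"target"},
}

def is_optimal_block_site(sartorius_medial_border_mm: float,
                          adductor_longus_medial_border_mm: float,
                          tolerance_mm: float = 5.0) -> bool:
    """Rule from the text: the optimal site is where the medial borders of
    sartorius and adductor longus align (the tolerance value is invented)."""
    return abs(sartorius_medial_border_mm
               - adductor_longus_medial_border_mm) <= tolerance_mm
```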

Figure 2. Sono-anatomy of the adductor canal block. (a) Illustration showing a cross-section of the mid-thigh. (b) Enlarged illustration of the structures seen on ultrasound during performance of an adductor canal block. (c) Ultrasound view during adductor canal block. (d) Ultrasound view labelled by AnatomyGuide.

Extended uses of machine learning systems in ultrasound-guided regional anaesthesia

Gaining early competencies in ultrasound-guided regional anaesthesia is particularly challenging. It is difficult to develop and use high-fidelity simulation, and training in the clinical setting can be inconsistent. Experience is often gained on an ad hoc basis, with long time intervals between episodes, and different trainers may have differing approaches. Assistive machine learning systems may provide supplementary information to facilitate ultrasound-guided regional anaesthesia training for inexperienced operators. Simply highlighting the relevant structures will aid understanding of their likely position and appearance in future ultrasound analysis. This may support initial skill acquisition and shorten the period of direct supervision, easing the transition to indirectly supervised or solo practice.

In the era of competency-based training, quantitative assessment and evaluation of operator expertise are important but difficult. They are often not practical in the clinical environment, and innovation is required. One approach to assessment is based on proficiency-based progression [22]. By using descriptions of ultrasound-guided regional anaesthesia performance broken down into specific actions, machine learning analysis of data (e.g. video recordings of the operator, analysis of sonographic video or needle-tracking technology) can provide an evaluation of the quality of operator performance. Assuming a robust and successful evaluation of such systems, this method may facilitate standardised assessment of operator performance and reduce subjectivity in evaluation [23].

Furthermore, it has been suggested that a move towards standardising the implementation of regional anaesthesia may engage a greater body of anaesthetists in its practice [4]. Computational systems, by their nature, assess novel data in a consistent manner, thus their use could act as a conduit to facilitating the recommendation to standardise ultrasound-guided approaches to peripheral nerve blocks [4].

Potential limitations of machine learning systems in ultrasound-guided regional anaesthesia

Technological advancement is not without potential pitfalls, and the regulatory landscape for AI applied to medical imaging is still developing. Few products have obtained regulatory approval to date, particularly those evaluating images in real time. A personal teaching approach should remain central to training in ultrasound-guided regional anaesthesia and should not be replaced by 'technological supervision'. Operators must still learn where to commence ultrasound scanning, and must assimilate the nuances of probe pressure, angulation, rotation and tilt to optimise image acquisition. Integrating AI into image analysis may also allow training in sono-anatomical recognition to progress unevenly relative to needle-probe co-ordination.

In time, there will need to be evidence that such systems improve operator performance and patient outcomes to justify continued development and implementation in clinical practice. There is potential for inaccuracies in the labelling of anatomy in such a system; strict validation and quality control will need to be applied, particularly in the context of atypical or complex clinical presentations and anatomy. Such reservations are applicable to all new AI technologies, and previous methodological concerns include poor validation, over-prediction and lack of transparency [24].

Early models will inevitably be improved upon, but even the first systems employed in clinical practice must offer superior ultrasound image analysis to the non-expert practitioner. A subsequent, and more stringent, challenge will be to ensure they augment operators with high-level expertise; machine learning systems are not guaranteed to be superior to human performance [23] and should not be relied upon to replace clinician knowledge. Conversely, identifying features and associations that are not regularly appreciated by eye might not improve clinical performance or outcomes.

Artificial intelligence systems for ultrasound may require the acquisition of new ultrasound machines or retrofitting to current devices, both of which may understandably delay uptake and incur cost. Finally, unpredictable clinical implications will likely emerge; these should be anticipated and addressed where possible.

Conclusion

Despite early promise, the potential of AI in medical image analysis is yet to be realised, and few applications are currently employed in medical practice [25]. In particular, machine learning for ultrasound-guided regional anaesthesia appears to have received relatively little attention. Anatomical knowledge and ultrasound image interpretation are of paramount importance in ultrasound-guided regional anaesthesia, but human performance and teaching of both are known to be fallible. Robust and reliable AI technologies could support clinicians to optimise performance, increase uptake and standardise training in ultrasound-guided regional anaesthesia. Mark R Sullivan realised the potential of the mobile telephone decades before it impacted the public consciousness. We believe that AI systems in healthcare will have a similar impact, including in the field of ultrasound-guided regional anaesthesia, offering innovative solutions to change service provision and workforce education. Anaesthetists should embrace this opportunity and engage in the development of these technologies to ensure they are used to enhance the specialty in a transformative manner.

Acknowledgements

The authors would like to acknowledge the contributions of Dr F. Zmuda (Fig. 1) and Dr J. Mortimer (Fig. 2) for the production of illustrations used in this article. JB is a Clinical Advisor for and receives honoraria from Intelligent Ultrasound Limited. KE has received research, honoraria and educational funding from Fisher and Paykel Healthcare Ltd, GE Healthcare, and Ambu, and is an Editor for Anaesthesia. DL is a Clinical Advisor for and receives honoraria from Intelligent Ultrasound Limited and is the Lead Clinician on AnatomyGuide. No other competing interests declared.