TY - JOUR
T1 - Software annotation of defibrillator files
T2 - Ready for prime time?
AU - Gupta, Vishal
AU - Schmicker, Robert H.
AU - Owens, Pamela
AU - Pierce, Ava E.
AU - Idris, Ahamed H.
N1 - Funding Information:
This study was supported in part by US National Institutes of Health grant HL 077887 (AHI) and American Heart Association National Center grant #100205. Sponsors had no involvement in the conception, execution, or writing of this study.
Funding Information:
Dr. Idris receives grant support from the US National Institutes of Health (NIH), the American Heart Association, and the US Department of Defense. He serves as an unpaid volunteer on the American Heart Association National Emergency Cardiovascular Care Committee and the HeartSine, Inc. Clinical Advisory Board.
Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2021/3
Y1 - 2021/3
N2 - Background: High-quality chest compressions are associated with improved outcomes after cardiac arrest. Defibrillators record important information about chest compressions during cardiopulmonary resuscitation (CPR) and can be used in quality-improvement programs. Defibrillator review software can automatically annotate files and measure chest compression metrics. However, evidence is limited regarding the accuracy of such measurements. Objective: To compare chest compression fraction (CCF) and rate measurements made with software annotation vs. manual annotation vs. limited manual annotation of defibrillator files recorded during out-of-hospital cardiac arrest (OHCA) CPR. Methods: This was a retrospective, observational study of 100 patients who had CPR for OHCA. We assessed chest compression bioimpedance waveforms from the time of initial CPR until defibrillator removal. A reviewer revised software annotations in two ways: completely manual annotations and limited manual annotations, which marked the beginning and end of CPR and return of spontaneous circulation (ROSC) but not individual chest compressions. Measurements of CCF and rate were compared using intraclass correlation coefficient (ICC) analysis. Results: Case mean rate showed no significant difference between the methods (108.1–108.6 compressions per minute), and the ICC was excellent (>0.90). The case mean (±SD) CCF for software, manual, and limited manual annotation was 0.64 ± 0.19, 0.86 ± 0.07, and 0.81 ± 0.10, respectively. The ICC for manual vs. limited manual annotation of CCF was 0.69, while for individual minute epochs it was 0.83. Conclusion: Software annotation performed very well for chest compression rate. For CCF, the difference between manual and software annotation measurements was clinically important, while manual and limited manual annotation yielded similar measurements with a good-to-excellent ICC.
AB - Background: High-quality chest compressions are associated with improved outcomes after cardiac arrest. Defibrillators record important information about chest compressions during cardiopulmonary resuscitation (CPR) and can be used in quality-improvement programs. Defibrillator review software can automatically annotate files and measure chest compression metrics. However, evidence is limited regarding the accuracy of such measurements. Objective: To compare chest compression fraction (CCF) and rate measurements made with software annotation vs. manual annotation vs. limited manual annotation of defibrillator files recorded during out-of-hospital cardiac arrest (OHCA) CPR. Methods: This was a retrospective, observational study of 100 patients who had CPR for OHCA. We assessed chest compression bioimpedance waveforms from the time of initial CPR until defibrillator removal. A reviewer revised software annotations in two ways: completely manual annotations and limited manual annotations, which marked the beginning and end of CPR and return of spontaneous circulation (ROSC) but not individual chest compressions. Measurements of CCF and rate were compared using intraclass correlation coefficient (ICC) analysis. Results: Case mean rate showed no significant difference between the methods (108.1–108.6 compressions per minute), and the ICC was excellent (>0.90). The case mean (±SD) CCF for software, manual, and limited manual annotation was 0.64 ± 0.19, 0.86 ± 0.07, and 0.81 ± 0.10, respectively. The ICC for manual vs. limited manual annotation of CCF was 0.69, while for individual minute epochs it was 0.83. Conclusion: Software annotation performed very well for chest compression rate. For CCF, the difference between manual and software annotation measurements was clinically important, while manual and limited manual annotation yielded similar measurements with a good-to-excellent ICC.
KW - Automatic software
KW - Cardiopulmonary arrest
KW - Cardiopulmonary resuscitation
KW - Chest compressions
KW - Ventricular fibrillation
UR - http://www.scopus.com/inward/record.url?scp=85099228433&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099228433&partnerID=8YFLogxK
U2 - 10.1016/j.resuscitation.2020.12.019
DO - 10.1016/j.resuscitation.2020.12.019
M3 - Article
C2 - 33388365
AN - SCOPUS:85099228433
SN - 0300-9572
VL - 160
SP - 7
EP - 13
JO - Resuscitation
JF - Resuscitation
ER -