Predicting 30‐Day Hospital Readmissions in Acute Myocardial Infarction: The AMI “READMITS” (Renal Function, Elevated Brain Natriuretic Peptide, Age, Diabetes Mellitus, Nonmale Sex, Intervention with Timely Percutaneous Coronary Intervention, and Low Systolic Blood Pressure) Score
Background Readmissions after hospitalization for acute myocardial infarction (AMI) are common. However, the few currently available AMI readmission risk prediction models have poor‐to‐modest predictive ability and are not readily actionable in real time. We sought to develop an actionable and accurate AMI readmission risk prediction model to identify high‐risk patients as early as possible during hospitalization.
Methods and Results We used electronic health record data from consecutive AMI hospitalizations at 6 hospitals in north Texas from 2009 to 2010 to derive and validate models predicting all‐cause nonelective 30‐day readmissions, using stepwise backward selection and 5‐fold cross‐validation. Of 826 patients hospitalized with AMI, 13% had a 30‐day readmission. The first‐day AMI model (the AMI “READMITS” score) included 7 predictors: renal function, elevated brain natriuretic peptide, age, diabetes mellitus, nonmale sex, intervention with timely percutaneous coronary intervention, and low systolic blood pressure. It had an optimism‐corrected C‐statistic of 0.73 (95% confidence interval, 0.71–0.74) and was well calibrated. The full‐stay AMI model, which included 3 additional predictors (use of intravenous diuretics, anemia on discharge, and discharge to postacute care), had an optimism‐corrected C‐statistic of 0.75 (95% confidence interval, 0.74–0.76) with minimally improved net reclassification and calibration. Both AMI models outperformed corresponding multicondition readmission models.
Conclusions The parsimonious AMI READMITS score enables early prospective identification of high‐risk AMI patients for targeted readmissions reduction interventions within the first 24 hours of hospitalization. A full‐stay AMI readmission model only modestly outperformed the AMI READMITS score in terms of discrimination, but surprisingly did not meaningfully improve reclassification.
What Is New?
Among current readmission risk prediction models for acute myocardial infarction, the acute myocardial infarction READMITS score (renal function, elevated brain natriuretic peptide, age, diabetes mellitus, nonmale sex, intervention with timely percutaneous coronary intervention, and low systolic blood pressure) is the best at identifying patients at high risk for 30‐day hospital readmission; is easy to implement in clinical settings; and provides actionable data in real time.
What Are the Clinical Implications?
The acute myocardial infarction READMITS score can be used by clinicians at bedside to accurately predict which patients hospitalized with acute myocardial infarction are at high risk for readmissions within the first 24 hours of admission, allowing for targeted readmission reduction interventions.
Hospital readmissions after acute myocardial infarction (AMI) are frequent, costly, and potentially avoidable.1, 2, 3, 4 Nearly 1 in 6 patients with AMI have an unplanned readmission within 30 days of discharge, accounting for over $1 billion of annual healthcare costs.1, 2 Since 2012, hospitals have been subject to financial penalties for excessive 30‐day readmissions among patients hospitalized for AMI under the Hospital Readmissions Reduction Program, implemented by the Centers for Medicare and Medicaid Services (CMS). Although federal readmission penalties have incentivized readmissions reduction intervention strategies (known as transitional care interventions), these interventions are resource intensive, are most effective when implemented well before discharge, and have been only modestly successful when applied indiscriminately to all inpatients.5, 6, 7, 8
Predicting which patients with AMI are at highest risk for readmission would enable both clinicians and hospitals to proactively identify and target patients who are the most likely to benefit from intensive readmission prevention interventions, simultaneously optimizing the allocation of scarce intervention resources and maximizing the potential for a successful and sustainable intervention.9, 10 Head‐to‐head comparisons of multicondition versus disease‐specific readmission risk prediction models suggest that disease‐specific models are superior.11 However, a recent systematic review of AMI‐specific readmission models found that current models have only modest predictive ability and with uncertain generalizability because of methodological limitations.12 Furthermore, few existing AMI‐specific models have the potential to provide actionable data early during a patient's hospital course, which is the optimal time to initiate interventions to maximize effectiveness.5, 8, 12
Thus, the objectives of this study were (1) to create a pragmatic, actionable, and accurate prediction model to identify patients with AMI at high risk for 30‐day readmission as early as possible during hospitalization (ie, on the first day); (2) to assess whether including clinical data from the full hospital stay would meaningfully improve model performance compared with using data only from the first day of hospitalization; and (3) to compare our AMI models with other published readmission models.13, 14, 15
Study Design, Population, and Data Sources
The data, analytic methods, and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure because of the terms of our data use agreements. We conducted a retrospective observational cohort study using electronic health record (EHR) data routinely collected as part of clinical care from 6 diverse hospitals with percutaneous coronary intervention capabilities located in the Dallas‐Fort Worth Metroplex in north Texas from 2009 to 2010, including safety net, community, teaching, and nonteaching hospitals. All hospitals used the Epic EHR (Epic Systems Corporation, Verona, WI). Details of this cohort have been previously published.11, 14, 15, 16, 17 We used data from 2009 to 2010, before hospital‐based readmission interventions became widespread, to ensure that AMI cohorts across all 6 hospitals were comparable. (Although penalties under the CMS Hospital Readmissions Reduction Program were not administered until 2012, many hospitals across the country including in our region began implementing interventions in 2010 after the Patient Protection and Affordable Care Act was signed into law.18)
We included consecutive hospitalizations among adults ≥18 years old discharged with a principal diagnosis of AMI (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD‐9‐CM] codes 410.xx, excluding 410.x2 for a subsequent episode of care for AMI), consistent with the definition used by CMS for the Hospital Readmissions Reduction Program.19 For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who were transferred to another acute care facility, who left against medical advice, who died during hospitalization or within 30 days of discharge, or who did not have any abnormal troponin values during hospitalization.
The primary outcome was all‐cause 30‐day hospital readmission, defined as a nonelective hospitalization within 30 days of discharge to any of 75 acute care hospitals within a 100‐mile radius of Dallas, ascertained from an all‐payer regional hospitalization database.
We included all variables from our previously published multicondition EHR readmission models as candidate predictors, including sociodemographics, prior utilization, Charlson comorbidity index, select laboratory and vital sign abnormalities, length of stay, hospital complications (eg, venous thromboembolism), and disposition status.11, 14, 15 Laboratory values and vital signs were categorized and/or dichotomized based on cut points identified in previous studies.11, 14, 15 We also assessed additional AMI‐specific variables for inclusion that met the following criteria: (1) available in the EHR of all participating hospitals; (2) routinely collected or available at the time of admission or discharge; and (3) plausible predictors of adverse outcomes based on prior literature and the clinical expertise of our multidisciplinary research team. These included select comorbidities such as coronary artery disease, depression, diabetes mellitus, hypertension, or chronic kidney disease; AMI‐related severity of illness on admission (ie, any of the following occurring within the first 24 hours of admission: heart strain, defined as an elevated brain natriuretic peptide [BNP] serum level; shock, defined as systolic blood pressure ≤100 mm Hg; ST‐segment–elevation myocardial infarction [ICD‐9 codes 410.x, excluding 410.7 for non–ST‐segment–elevation myocardial infarction]; elevated troponin level; and transfer to the critical care or intensive care unit); in‐hospital complications and procedures (ie, use of intravenous diuretics as a proxy for acute decompensated heart failure; coronary artery bypass grafting after the first 24 hours; and receipt of blood transfusion as a proxy for potential bleeding complications, since diagnosis codes for such complications were infrequently documented); and the presence of laboratory and vital sign abnormalities within 24 hours of discharge.
We developed 2 separate AMI‐specific models: 1 incorporating data from only the first 24 hours of hospitalization, termed the “first‐day” AMI model, and a second model incorporating data from the full hospital stay, termed the “full‐stay” AMI model. We classified candidate predictors as available either within 24 hours of admission, or by the time of discharge. For example, sociodemographic factors could be ascertained within the first 24 hours of hospitalization, whereas length of stay would not be available until discharge. Clinical predictors with missing values (ie, comorbidities, laboratory values) were assumed to be either not present (for comorbidities) or normal (for laboratory values). Nonclinical predictors such as sociodemographic characteristics, prior utilization, and disposition status as well as vital signs had very few missing values (<1% for each variable). Data on laboratory values were missing for <2% of subjects, aside from brain natriuretic peptide levels (addressed below). We assessed univariate relationships between readmission and each candidate predictor using a prespecified significance threshold of P≤0.20.
Because of the use of both BNP and N‐terminal pro‐B‐type natriuretic peptide (NT‐proBNP) across hospital sites, we categorized natriuretic peptide levels as follows: low=BNP <50 pg/mL or NT‐proBNP <125 pg/mL; moderate=BNP 51 to 99 pg/mL or NT‐proBNP 125 to 299 pg/mL; high=BNP 100 to 999 pg/mL or NT‐proBNP 300 to 4999 pg/mL; and extremely high=BNP ≥1000 pg/mL or NT‐proBNP ≥5000 pg/mL. Values for BNP and/or NT‐proBNP were not present for 39% of individuals; these were imputed as “normal/not elevated.” Similarly, because of the use of both troponin I and several different assays for troponin T across hospitals, we transformed troponin into an ordinal variable, defined as multiples of the upper limit of normal, using each hospital's specified reference values. Because neither of these approaches yielded improvement in model performance, we dichotomized both variables to maximize parsimony, using BNP ≥50 pg/mL or NT‐proBNP ≥125 pg/mL to define “elevated” BNP and >10 times the upper limit of normal to define “elevated” troponin.
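The final dichotomization rule above can be expressed as a small helper; this is a minimal illustrative sketch (the function name is ours, not from the study), with missing values treated as "not elevated" per the study's documentation‐by‐exception rationale:

```python
def elevated_natriuretic_peptide(bnp=None, nt_probnp=None):
    """Dichotomized 'elevated' natriuretic peptide per the cut points above:
    BNP >=50 pg/mL or NT-proBNP >=125 pg/mL. A missing value (None) is
    treated as not elevated, mirroring the study's handling of unordered tests.
    """
    if bnp is not None and bnp >= 50:
        return True
    if nt_probnp is not None and nt_probnp >= 125:
        return True
    return False
```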
Significant univariate candidate variables were entered in respective first‐day and full‐stay AMI‐specific multivariable logistic regression models using stepwise backward selection with a prespecified significance threshold of P≤0.10. In sensitivity analyses, we alternatively derived our models using stepwise forward selection using a significance threshold of P≤0.10, as well as stepwise backward selection minimizing the Bayesian Information Criterion and Akaike Information Criterion separately, and also derived models for the composite outcome of both 30‐day readmissions and mortality in addition to readmissions only. These alternate modeling strategies yielded models with predictors and effect sizes nearly identical to those in our final models (data not shown).
Rationale for approach to “missing” data.
Our aim was to create a pragmatic prediction model to identify patients with AMI at high risk for 30‐day readmission using readily available real‐time clinical data from the EHR. Because our models were based on existing data collected as part of clinical care (ie, not research or registry data), the rationale for our approach to treating “missing” data on comorbidities and laboratory data (including BNP and NT‐proBNP) as equivalent to “not present/normal” was based on our understanding of clinical documentation workflows, which are largely governed by the concept of “documentation by exception.” This refers to the phenomenon that documentation of comorbidities and laboratory values in the EHR typically only occurs when there is an exception to the expectation that these are not present. For example, “diabetes mellitus” is commonly documented but “diabetes mellitus not present” is rarely documented in medical records used for clinical care. Thus, lack of explicit documentation of diabetes mellitus is highly likely to indicate that diabetes mellitus is in fact not present. Additionally, lack of explicit documentation (ie, “missing” data) on comorbidities in medical records is not likely to be random. This is in contrast to research and/or registry data, where the presence of comorbidities of interest such as diabetes mellitus is more likely to be consistently ascertained across subjects and clearly documented as either “present” or “not present,” and thus a missing value would not necessarily be considered equivalent to “not present.”
Similarly, with respect to BNP and NT‐proBNP, in clinical practice, physicians typically only order these tests in the presence of signs, symptoms, or history that raise clinical suspicion for myocardial strain or heart failure—ie, this is a laboratory test that they would “order by exception,” to parallel the concept of “documentation by exception.” Conversely, the lack of a BNP value implies that the treating physician did not have a concern for a new clinical abnormality, which is valuable clinical information to take into consideration in an EHR‐based prediction model. Thus, we thought it was reasonable to assume that patients who did not have a BNP or NT‐proBNP measured were deemed to be at lower risk for myocardial strain and/or heart failure by treating physicians.
Because the purpose of our study was to develop a pragmatic model based on available clinical data, and because data on comorbidities and laboratory tests such as BNP and NT‐proBNP were not missing at random as described above, we did not apply multiple imputation to impute missing values for our cohort. Our approach is consistent with the approach used in the development of the CMS AMI model, which is based on the presence of comorbidities coded in administrative claims data, and is also the same approach we have used in our past studies on readmission risk prediction modeling.11, 13, 14, 15
We validated both AMI‐specific models using 5‐fold cross‐validation, randomly dividing the cohort into 5 equally sized subsets.20 In each cycle, 4 subsets were used for training to estimate model coefficients and the fifth was used for validation. This cycle was repeated 5 times such that each of the 5 subsets was used once as the validation set. We then repeated this entire process 50 times and averaged the C‐statistic estimates to derive an optimism‐corrected C‐statistic. We qualitatively assessed calibration by comparing observed with predicted probabilities of readmission by quintiles of predicted risk, and with the Hosmer‐Lemeshow goodness‐of‐fit test.21, 22
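The C‐statistic reported throughout is the probability that a randomly chosen readmitted patient is assigned a higher predicted risk than a randomly chosen nonreadmitted patient. As a minimal sketch of the metric (not the study's actual validation code, which used Stata), concordance can be computed directly:

```python
def c_statistic(predicted, readmitted):
    """C-statistic (concordance): the fraction of (readmitted, nonreadmitted)
    pairs in which the readmitted patient has the higher predicted risk;
    tied predictions count as half-concordant."""
    events = [p for p, y in zip(predicted, readmitted) if y == 1]
    nonevents = [p for p, y in zip(predicted, readmitted) if y == 0]
    n_pairs = len(events) * len(nonevents)
    if n_pairs == 0:
        return float("nan")
    concordant = sum(
        1.0 if e > ne else 0.5 if e == ne else 0.0
        for e in events for ne in nonevents
    )
    return concordant / n_pairs
```

A value of 0.5 indicates no better than chance discrimination; the 0.73 to 0.75 reported for the AMI models indicates moderately good discrimination.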
We derived a point‐based risk scoring system for our final first‐day AMI model. We assigned points to each variable by dividing each β‐coefficient by the lowest overall β‐coefficient and rounding to the nearest integer. We determined point cutoffs to define quintiles of predicted risk and assessed calibration separately from that of the corresponding logistic regression equation model.
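The point‐assignment step can be illustrated with hypothetical coefficients; the actual β‐coefficients appear in Table 2 and are not reproduced here:

```python
def points_from_betas(betas):
    """Assign integer points: divide each beta-coefficient by the smallest
    beta in the model and round to the nearest integer, as described above.
    `betas` maps predictor name -> logistic regression coefficient.
    (Note: Python's round() ties to the nearest even integer.)"""
    smallest = min(betas.values())
    return {name: round(beta / smallest) for name, beta in betas.items()}

# Hypothetical coefficients for illustration only (not the published values).
example = points_from_betas({"elevated_bnp": 0.9, "diabetes": 0.3, "nonmale": 0.6})
```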
We compared the first‐day and full‐stay AMI models to each other as well as to the corresponding multicondition EHR models our group has separately developed and the CMS AMI model derived from administrative claims data.13, 14, 15 We compared each existing model's performance using the C‐statistic, integrated discrimination index, and net reclassification index (NRI) using the AMI‐specific models as references.23 The integrated discrimination index is defined as the difference in the mean predicted probability of readmission between patients who were and were not actually readmitted between 2 models, where more positive values suggest improvement in model performance compared with a reference model.24 The NRI is defined as the sum of the net proportions of correctly reclassified people with and without the event of interest compared with a reference model.24, 25 Here, we calculated a category‐based NRI to evaluate the performance of AMI‐specific models in correctly reclassifying individuals with and without readmissions into the highest readmission risk quintiles versus the lowest 4 risk quintiles compared with other models. This prespecified cutoff is relevant for hospitals interested in identifying the highest risk individuals for targeted intervention.10 Finally, we assessed calibration of comparator models in our cohort. We conducted analyses using Stata 12.1 (StataCorp, College Station, TX). The UT Southwestern institutional review board reviewed and approved this study with a waiver of informed consent.
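For concreteness, the two‐category NRI used here (top risk quintile versus the lower 4) can be sketched as follows; the function and its inputs are illustrative, not drawn from the study cohort:

```python
def category_nri(ref_high, new_high, readmitted):
    """Two-category net reclassification index of a new model against a
    reference model. ref_high/new_high are booleans marking whether each
    patient falls in the high-risk category under each model."""
    events = [(r, n) for r, n, y in zip(ref_high, new_high, readmitted) if y == 1]
    nonevents = [(r, n) for r, n, y in zip(ref_high, new_high, readmitted) if y == 0]
    # Net proportion of events correctly reclassified upward, plus net
    # proportion of nonevents correctly reclassified downward.
    up_events = sum(1 for r, n in events if n and not r)
    down_events = sum(1 for r, n in events if r and not n)
    up_nonevents = sum(1 for r, n in nonevents if n and not r)
    down_nonevents = sum(1 for r, n in nonevents if r and not n)
    return ((up_events - down_events) / len(events)
            + (down_nonevents - up_nonevents) / len(nonevents))
```

Positive values favor the new model; an NRI near zero, as observed for the full‐stay versus first‐day comparison, indicates no meaningful reclassification benefit.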
Of 826 index AMI hospitalizations, 13.0% had a 30‐day readmission. Individuals with a readmission had markedly different sociodemographic and clinical characteristics compared with those who were not readmitted (Table 1). Troponin values were similar among patients with and without a readmission. ST‐segment–elevation myocardial infarction was less common among those who were readmitted (21.2% versus 26.2%, P=0.30, Table 1).
Performance of a First‐Day AMI Model: the AMI “READMITS” Score
Our final first‐day model, termed the AMI “READMITS” score, included 7 variables: renal function (serum creatinine >2 mg/dL); elevated BNP; age (per decade >18 years); diabetes mellitus history; nonmale sex; no intervention with timely percutaneous coronary intervention; and systolic blood pressure <100 mm Hg (Table 2). The AMI READMITS score had good discrimination (C‐statistic 0.75, 95% confidence interval [CI], 0.70–0.80; optimism‐corrected C‐statistic 0.73, 95% CI, 0.71–0.74; Table 3). It also effectively stratified individuals across a broad range of risk (average predicted risk by decile ranged from 2.1% to 41.1%) and was well calibrated, with less than a 2% difference between mean predicted and observed readmission rates by quintile (Table 4). Approximately one third of patients predicted to be at high risk (AMI READMITS score ≥20) had an observed 30‐day readmission versus only 2% of patients predicted to be at low risk (AMI READMITS score ≤13).
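Given the cut points reported above (score ≥20 flagged as high risk, ≤13 as low risk), bedside triage could be sketched as follows; the "intermediate" label for the middle range is our own shorthand, not a category defined by the study:

```python
def readmits_risk_band(score):
    """Map a computed AMI READMITS score to the risk bands reported above.
    Observed 30-day readmission was roughly 1 in 3 for scores >= 20 and
    about 2% for scores <= 13; 'intermediate' is an assumed label for the
    middle range, not a category defined in the study."""
    if score >= 20:
        return "high"
    if score <= 13:
        return "low"
    return "intermediate"
```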
Performance of a Full‐Stay AMI Model
Our final full‐stay AMI model included 10 variables, including all 7 predictors from the first‐day model, and 3 additional predictors available at discharge: use of intravenous diuretic medications at least once during hospitalization, anemia on discharge, and discharge to a post–acute care facility. The full‐stay AMI model also had good discrimination (C‐statistic 0.78, 95% CI, 0.74–0.83; optimism‐corrected C‐statistic 0.75, 95% CI, 0.74–0.76), stratified individuals across a broad range of risk (with average predicted risk by decile ranging from 1.6% to 43.9%; Table 3), and was well calibrated (Figure).
AMI READMITS Score Versus the Full‐Stay AMI Model
Although the full‐stay AMI model had modestly better discrimination than the first‐day AMI READMITS score (P=0.001 for comparison), it did not meaningfully improve net reclassification (NRI 0.04, 95% CI, −0.03 to 0.11; Table 3). The AMI READMITS score and the full‐stay AMI model were similarly well calibrated, with modest overestimation of predicted risk in the highest and lowest risk quintiles by both models (Figure).
AMI READMITS Score Versus Other Models
The AMI READMITS score outperformed the first‐day multicondition model with better discrimination (C‐statistic 0.75 versus 0.70, P=0.04) and had improved net reclassification (Table 3). The AMI READMITS score was better calibrated than the first‐day multicondition EHR model, which overestimated risk in the lower 3 quintiles and underestimated risk in the top 2 quintile risk groups (Figure).
Compared with the CMS AMI administrative model, the AMI READMITS score had similar discrimination with no meaningful improvement in net reclassification (NRI 0.03, 95% CI, −0.07 to 0.14) (Table 3). However, the AMI READMITS score stratified individuals into a much broader range of average predicted risk (2.1%–41.1% versus 7.2%–24.3%; Table 3) and was better calibrated (Figure).
Using data from 6 diverse hospitals, we developed and validated the AMI READMITS score, a parsimonious risk prediction score that can be used by clinicians and hospital systems to identify patients hospitalized with AMI at high risk for 30‐day readmission within the first 24 hours of admission. The AMI READMITS score, derived from an AMI‐specific model using EHR data from the first day, outperformed most other models—including our own multicondition EHR models—in all aspects of model performance (discrimination, calibration, and reclassification). Surprisingly, incorporating more data from the full hospital stay into the AMI READMITS score only modestly improved discrimination but did not meaningfully improve calibration or net reclassification.
The limited improvement in performance of the full‐stay AMI model compared with the first‐day AMI READMITS score suggests that in‐hospital factors such as clinical stability, trajectory during hospitalization, and disposition status are less important predictors of readmissions among patients hospitalized with AMI than in other conditions such as pneumonia.11 Thus, a key finding of our study is that patients' readmission risk can be accurately predicted with the AMI READMITS score on the first day of hospitalization, enabling targeted early intervention to maximize the potential benefit of readmission reduction interventions.5, 8 This approach can be implemented by clinicians at bedside, or by hospitals and health systems by integrating the AMI READMITS score directly into the EHR, or by extracting EHR data in near real‐time, as our group has previously done for heart failure.10
A second key finding of our study is that clinical severity measures directly related to the AMI (shock, heart strain or failure, renal dysfunction) and timely percutaneous coronary intervention were strong predictors of readmission risk. In contrast to our multicondition and pneumonia‐specific readmission models, key nonclinical factors such as social factors (ie, marital status or residing in a low‐income neighborhood), in‐hospital clinical trajectory, and complications (ie, changes in clinical status, vital sign abnormalities on discharge) were not predictive of 30‐day readmissions in AMI.11, 14 This suggests that readmission risk in AMI may be more strongly influenced by clinical interventions and comorbidity management than readmission risk in other conditions.
Although elevated cardiac troponin levels have previously been found to predict adverse cardiac events in various settings and populations,26, 27 we were surprised to find that the magnitude of troponin elevation did not differ between patients who were and were not readmitted. Troponin was also not an independent predictor of adverse 30‐day outcomes, even in the sensitivity‐analysis models predicting the composite outcome of 30‐day readmissions and mortality (data not shown). Previous studies have suggested that elevated BNP—which was included in the AMI READMITS score—rather than elevated troponin may be a better predictor of adverse events in patients with acute coronary syndrome.28 Furthermore, the downstream clinical consequences of AMI may matter more than the magnitude of infarction as measured by troponin level, since cardiogenic shock, heart strain, heart failure, and renal dysfunction were strongly predictive of readmission in our models.29
Another key implication of our study is that for AMI, a disease‐specific modeling approach has better predictive ability than using a multicondition approach. Compared with a first‐day multicondition model, the AMI READMITS score correctly reclassified an additional 18% of patients. Thus, hospitals interested in identifying the highest risk patients with AMI for targeted interventions should do so using the disease‐specific AMI READMITS score. Of note, another disease‐specific model, the CMS AMI administrative model, had similar discrimination but poorer calibration than the AMI READMITS score in this cohort. Additionally, the CMS AMI model is not usable in clinical settings for near real‐time risk prediction, since it is based on 31 variables ascertained from claims data not available until well after discharge.13
Our study was notable for several strengths. First, the AMI READMITS score is parsimonious and incorporates clinically relevant predictors available within the first day of hospital admission that can be easily calculated by clinicians. Second, we used routinely collected and available data from a common commercial EHR system, allowing for implementation through automation and integration directly into the EHR. Third, our study population was derived from 6 hospitals diverse in payer status, age, race/ethnicity, and socioeconomic status, increasing the potential generalizability of our findings. Fourth, our models are less likely to be overfit to the idiosyncrasies of our data given that the predictors in our final AMI‐specific models have good clinical face validity, and have been associated with adverse outcomes, particularly mortality, in prior studies of this population.28, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39
Our results should be interpreted in the context of several limitations. First, generalizability to other regions and settings is unknown, although our inclusion of a large, diverse sample of patients treated at 6 different hospitals (including safety net, community, teaching, and nonteaching institutions) should minimize this concern. Future studies should focus on external validation of this model in other populations and settings. Second, although we used cross‐validation and optimism‐corrected estimates of the C‐statistic to reduce the risk of overfitting, the AMI READMITS score has not yet been externally validated in a separate cohort, which would further strengthen its validity. Third, we were unable to include data on medications (aspirin, β‐blockers, and angiotensin‐converting enzyme inhibitors), AMI care process measures (door‐to‐balloon time), or clinical characteristics of the AMI (location, size of infarction), which may also influence readmission risk.
In conclusion, the AMI READMITS score is parsimonious, uses clinically relevant risk factors, outperformed the CMS AMI and our previous EHR multicondition readmission prediction models, and yields actionable data on the first day of hospitalization to enable early prospective identification of high‐risk AMI patients for targeted readmissions reduction interventions. The AMI READMITS score can be easily implemented by clinicians at the bedside and/or by hospitals with integration directly into the EHR for near real‐time use.
Sources of Funding
This work was supported by the Agency for Healthcare Research and Quality‐funded UT Southwestern Center for Patient‐Centered Outcomes Research (R24 HS022418‐01) and the Commonwealth Foundation (#20100323). Dr Nguyen received funding support from the UT Southwestern KL2 Scholars Program (KL2 TR001103) and the National Heart, Lung, and Blood Institute (NHLBI K23 HL13341‐01). Dr Makam received funding support from the National Institute on Aging (NIA K23 AG052603). Dr Halm was supported in part by the National Center for Advancing Translational Sciences at the National Institutes of Health (U54 RFA‐TR‐12‐006). The study sponsors had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; or preparation, review, or approval of the article.
Findings from this study were presented at the American Heart Association Quality of Care and Outcomes Research Scientific Sessions, April 2 to 3, 2017, in Arlington, VA and at the Society of General Internal Medicine Annual Meeting, April 19 to 22, 2017 in Washington, DC.
- Fingar K, Washington R. Trends in Hospital Readmissions for Four High‐Volume Conditions, 2009–2013. Rockville, MD: Agency for Healthcare Research and Quality; 2015.
- Yale New Haven Health Services Corporation Center for Outcomes Research and Evaluation. Medicare Hospital Quality Chartbook: Variation in 30‐Day Readmission Rates Across Hospitals Following Hospitalization for Acute Myocardial Infarction. Baltimore, MD: Centers for Medicare & Medicaid Services; 2015.
- Yale New Haven Health Services Corporation Center for Outcomes Research and Evaluation. Medicare Hospital Quality Chartbook 2014: Performance Report on Outcome Measures. Baltimore, MD: Centers for Medicare and Medicaid Services; 2014.
- Desai NR, Ross JS, Kwon JY, Herrin J, Dharmarajan K, Bernheim SM, Krumholz HM, Horwitz LI. Association between hospital penalty status under the hospital readmission reduction program and readmission rates for target and nontarget conditions. JAMA. 2016;316:2647–2656.
- Amarasingham R, Patel PC, Toto K, Nelson LL, Swanson TS, Moore BJ, Xie B, Zhang S, Alvarez KS, Ma Y, Drazner MH, Kollipara U, Halm EA. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22:998–1005.
- Makam AN, Nguyen OK, Clark C, Zhang S, Xie B, Weinreich M, Mortensen EM, Halm EA. Predicting 30‐day pneumonia readmissions using electronic health record data. J Hosp Med. 2017;12:209–216.
- Smith LN, Makam AN, Darden D, Mayo H, Das SR, Halm EA, Nguyen OK. Acute myocardial infarction readmission risk prediction models: a systematic review of model performance. Circ Cardiovasc Qual Outcomes. 2018;11:e003885.
- Krumholz HM, Lin Z, Drye EE, Desai MM, Han LF, Rapp MT, Mattera JA, Normand S‐LT. An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4:243–252.
- Nguyen OK, Makam AN, Clark C, Zhang S, Xie B, Velasco F, Amarasingham R, Halm EA. Predicting all‐cause readmissions using electronic health record data from the entire hospitalization: model development and comparison. J Hosp Med. 2016;11:473–480.
- Amarasingham R, Velasco F, Xie B, Clark C, Ma Y, Zhang S, Bhat D, Lucena B, Huesch M, Halm EA. Electronic medical record‐based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15:39.
- Nguyen OK, Makam AN, Clark C, Zhang S, Xie B, Velasco F, Amarasingham R, Halm EA. Vital signs are still vital: instability on discharge and the risk of post‐discharge adverse outcomes. J Gen Intern Med. 2017;32:42–48.
- Makam AN, Nguyen OK, Clark C, Halm EA. Incidence, predictors, and outcomes of hospital‐acquired anemia. J Hosp Med. 2017;12:317–322.
- Wasfy JH, Zigler CM, Choirat C, Wang Y, Dominici F, Yeh RW. Readmission rates after passage of the hospital readmissions reduction program: a pre‐post analysis. Ann Intern Med. 2017;166:324–331.
- Centers for Medicare & Medicaid Services, The Joint Commission. Specifications Manual for National Hospital Inpatient Quality Measures, Version 5.0b. Effective 2015.
- Vittinghoff E, Glidden DV, Shiboski SC, McCulloch CE. Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models. New York: Springer; 2012.
- Moons KG, Altman DG, Reitsma JB, Ioannidis JP, Macaskill P, Steyerberg EW, Vickers AJ, Ransohoff DF, Collins GS. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med. 2015;162:W1–W73.
- Vogiatzis I, Dapcevic I, Datsios A, Koutsambasopoulos K, Gontopoulos A, Grigoriadis S. A comparison of prognostic value of the levels of ProBNP and troponin T in patients with acute coronary syndrome (ACS). Med Arch. 2016;70:269–273.
- Jolly SS, Shenkman H, Brieger D, Fox KA, Yan AT, Eagle KA, Steg PG, Lim KD, Quill A, Goodman SG; Investigators G. Quantitative troponin and death, cardiogenic shock, cardiac arrest and new heart failure in patients with non‐ST‐segment elevation acute coronary syndromes (NSTE ACS): insights from the Global Registry of Acute Coronary Events. Heart. 2011;97:197–202.
- Marenzi G, Cabiati A, Cosentino N, Assanelli E, Milazzo V, Rubino M, Lauri G, Morpurgo M, Moltrasio M, Marana I, De Metrio M, Bonomi A, Veglia F, Bartorelli A. Prognostic significance of serum creatinine and its change patterns in patients with acute coronary syndromes. Am Heart J. 2015;169:363–370.
- Milcent C, Dormont B, Durand‐Zaleski I, Steg PG. Gender differences in hospital mortality and use of percutaneous coronary intervention in acute myocardial infarction: microsimulation analysis of the 1999 nationwide French hospitals database. Circulation. 2007;115:833–839.
- Anderson RD, Pepine CJ. Gender differences in the treatment for acute myocardial infarction: bias or biology? Circulation. 2007;115:823–826.
- McNamara RL, Kennedy KF, Cohen DJ, Diercks DB, Moscucci M, Ramee S, Wang TY, Connolly T, Spertus JA. Predicting in‐hospital mortality in patients with acute myocardial infarction. J Am Coll Cardiol. 2016;68:626–635.
- McNamara RL, Wang Y, Herrin J, Curtis JP, Bradley EH, Magid DJ, Peterson ED, Blaney M, Frederick PD, Krumholz HM; Investigators N. Effect of door‐to‐balloon time on mortality in patients with ST‐segment elevation myocardial infarction. J Am Coll Cardiol. 2006;47:2180–2186.
- Berger PB, Ellis SG, Holmes DR Jr., Granger CB, Criger DA, Betriu A, Topol EJ, Califf RM. Relationship between delay in performing direct coronary angioplasty and early clinical outcome in patients with acute myocardial infarction: results from the Global Use of Strategies to Open Occluded Arteries in Acute Coronary Syndromes (GUSTO‐IIb) trial. Circulation. 1999;100:14–20.
- De Luca G, Suryapranata H, Ottervanger JP, Antman EM. Time delay to treatment and mortality in primary angioplasty for acute myocardial infarction: every minute of delay counts. Circulation. 2004;109:1223–1225.
- Webb JG, Sleeper LA, Buller CE, Boland J, Palazzo A, Buller E, White HD, Hochman JS. Implications of the timing of onset of cardiogenic shock after acute myocardial infarction: a report from the SHOCK Trial Registry. Should we emergently revascularize occluded coronaries for cardiogenic shock? J Am Coll Cardiol. 2000;36:1084–1090.