Derivation and Validation of an In‐Hospital Mortality Prediction Model Suitable for Profiling Hospital Performance in Heart Failure
Background Comparing heart failure (HF) outcomes across hospitals requires adequate risk adjustment. We aimed to develop and validate a model that can be used to compare quality of HF care across hospitals.
Methods and Results We included patients with HF aged ≥18 years admitted to one of 433 hospitals that participated in the Premier Inc Data Warehouse. We developed a model (Premier) containing patient demographics, comorbidities, and acute conditions present on admission, derived from administrative and billing records. In a separate data set derived from electronic health records, we validated the Premier model by comparing hospital risk‐standardized mortality rates calculated with the Premier model to those calculated with a validated clinical model containing laboratory data (LAPS [Laboratory‐Based Acute Physiology Score]). Among the 200 832 admissions in the Premier Inc Data Warehouse, inpatient mortality was 4.0%. The model showed acceptable discrimination in the warehouse data (C statistic 0.75; 95% confidence interval, 0.74–0.76). In the validation data set, both the Premier and LAPS models showed acceptable discrimination (C statistic: Premier: 0.76 [95% confidence interval, 0.74–0.77]; LAPS: 0.78 [95% confidence interval, 0.76–0.80]). Risk‐standardized mortality rates for both models ranged from 2% to 7%. Linear regression of the Premier‐derived mortality rates on the LAPS‐derived rates yielded a slope of 0.71 (SE: 0.07). The correlation coefficient of the standardized mortality rates from the 2 models was 0.82.
Conclusions Compared with a validated model derived from clinical data, an HF mortality model derived from administrative data showed highly correlated risk‐standardized mortality rate estimates, suggesting it could be used to identify high‐ and low‐performing hospitals for HF care.
What Is New?
In a data set derived from electronic health records, we compared 2 models that calculate hospital mortality rates for patients with heart failure and found that a model that used only billing data performed very similarly to a model that used clinical data derived from medical records.
What Are the Clinical Implications?
When using data sets that contain only information from hospitalizations (ie, those that lack prior outpatient data) or when clinical data are not available, our model that uses billing data could be useful for comparing quality of heart failure care across hospitals; it also allows for identification of hospitals with low mortality rates, creating an opportunity to conduct future studies that examine strategies of high‐performing hospitals for heart failure care.
Heart failure (HF) accounts for ≈1 million hospital admissions and $39 billion spent per year.1, 2 This large volume and high cost have led inpatient HF care to become a major focus of hospital quality measurement and improvement efforts, including public reporting of hospital‐level mortality and readmission rates on websites such as Hospital Compare.3
To ensure that observed differences in outcomes between hospitals are not largely the result of differences in patient characteristics, care must be taken to adjust for case mix when describing differences in quality across hospitals. When estimating risk‐standardized outcomes, analyses should also take into account hospital size and the clustered nature of the data (patients within hospitals) and should include only variables that are not related to the quality of care delivered.4 Finally, the risk‐adjustment model should be validated in different populations of patients and across hospitals. Although electronic health record (EHR) data are likely to become the standard data source for these purposes, EHR data are not yet routinely available, meaning that there is still a need for risk‐adjustment methods that do not use clinical data. The aim of this study was to develop and validate a model that can be used to compare quality of HF care across hospitals in situations in which clinical data are not available.
A recent innovation that will facilitate severity adjustment in claims data is the development of multihospital databases (eg, Premier, University HealthSystem Consortium) that standardize highly detailed billing data across hospitals, providing time‐ or date‐stamped information about all tests and services provided to individual patients.5, 6, 7, 8, 9 Using one of these data sets, we sought to develop and validate a model that could be used to compare hospitals' performance in the care of HF patients. Then, in a separate hospital data set derived from hospital EHRs, we aimed to validate this model at the hospital level by comparing hospital risk‐standardized mortality rates (RSMRs) for our model with RSMRs calculated from a validated model that uses laboratory results to predict mortality.
Derivation and Internal Validation Cohort
We gathered data from the cost‐accounting systems of 433 hospitals that participated in the Premier Inc Data Warehouse (PDW; a voluntary, fee‐supported database) between January 1, 2009, and June 30, 2011. PDW contains all elements found in hospital claims derived from the Uniform Billing 04 form. In addition, PDW contains an itemized, date‐stamped log of all items and services charged to the patient or the insurer, including medications, diagnostic and therapeutic services, and laboratory tests. PDW includes ≈15% to 20% of all US hospitalizations. Participating hospitals are drawn from all regions of the United States, with greater representation from urban and southern hospitals. PDW has been used extensively for research purposes.5, 6, 7, 8, 9 Because the data are proprietary, we are not able to make the data set or study materials available to other researchers.
We included patients who were aged ≥18 years and had a principal International Classification of Diseases, Ninth Revision, Clinical Modification (ICD‐9‐CM) diagnosis of HF or a principal diagnosis of respiratory failure with secondary diagnosis of HF when both HF and respiratory failure were coded “present on admission” (POA; ICD‐9‐CM codes for HF: 402.01, 402.11, 402.91, 404.01, 404.03, 404.11, 404.13, 404.91, 404.93, 428.xx10, 11; for respiratory failure: 518.81, 518.82, 518.84). Given the broad set of inclusion codes, we ensured that patients were treated for acute decompensated HF during the hospitalization by restricting the cohort to patients in whom at least 1 HF therapy (including diuretics, metolazone, inotropes, vasodilators, or intra‐aortic balloon pump) was initiated within the first 2 days of hospitalization. In administrative data sets, the duration of the first hospital day includes partial days that can vary in length, so we chose the first 2 days of hospitalization (rather than just the first day) for initiation of an HF therapy. We excluded patients with a pediatric or psychiatric attending physician, those with elective admissions, and those who were transferred from or to another acute care facility (because we could not accurately determine the onset or subsequent course of their illness). For patients with repeat visits at a single hospital, 1 visit was randomly selected for inclusion. Patients were randomly assigned to a derivation cohort (80%) and an internal validation cohort (20%). The institutional review board at Baystate Medical Center granted permission to conduct the study and granted a waiver of informed consent because of the deidentified nature of the data.
External Validation Cohort
We validated the model in a population of patients with HF seen at hospitals that contributed to the HealthFacts database (Cerner Corp.) between January 2010 and December 2012. HealthFacts is a multihospital data set derived from the comprehensive EHRs of 116 geographically and structurally diverse hospitals throughout the United States. HealthFacts contains time‐stamped pharmacy, laboratory, and billing information and contains records, including clinical data such as laboratory data, of >84 million acute admissions, emergency room visits, and ambulatory visits. We limited the sample to hospitals that contributed to the pharmacy, laboratory, and diagnosis segments of the database and had at least 20 eligible HF patients during the study period (to obtain stable hospital rates) after applying the same patient‐level inclusion criteria.
The primary outcome for both cohorts was hospital‐specific risk‐standardized all‐cause in‐hospital mortality.
Patient predictors of mortality.
Using the derivation cohort, we identified candidate variables that were used in prior risk adjustment models,10, 12, 13 including patient age, sex, marital status, insurance status, and race/ethnicity. We used software provided by the Healthcare Costs and Utilization Project of the Agency for Healthcare Research and Quality14, 15 to identify the presence of comorbid conditions. In addition, we used POA codes to identify other acute conditions that are of concern in the setting of HF but that are not recognized in the Elixhauser comorbidity index. These conditions included atrial fibrillation, acute myocardial infarction, pneumonia, and acute kidney injury. Because we lacked echocardiogram results, we used ICD‐9‐CM codes to identify HF subtypes: systolic only, diastolic only, or both.
Premier model development.
In the PDW population, we used a generalized estimating equation logistic regression model, clustering on hospital, to predict each patient's in‐hospital mortality. We initially included all clinically relevant variables in the model, including variables with a well‐established association with mortality (eg, age), all conditions included in the Elixhauser comorbidity index,14 and selected comorbid acute illnesses (eg, acute myocardial infarction that was present on admission). Using backward selection, we retained variables in the final model (hereafter called the “Premier” model) with P<0.05. We calculated the area under the receiver operating characteristic curve (C statistic) and examined the model's calibration. We then applied the model coefficients to the validation cohort and examined model fit. Of note, we previously externally validated, at the patient level, the Premier model by comparing its performance to the performance of other published clinical HF models in a separate clinical data set.16
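Discrimination throughout this study is summarized by the C statistic. As a minimal illustration of its pairwise definition (this is an explanatory sketch in numpy, not the authors' SAS/Stata code), the C statistic is the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor:

```python
import numpy as np

def c_statistic(died, risk):
    """C statistic (area under the ROC curve): probability that a
    randomly chosen patient who died was assigned a higher predicted
    risk than a randomly chosen survivor, counting ties as 1/2."""
    died = np.asarray(died, dtype=bool)
    risk = np.asarray(risk, dtype=float)
    pos = risk[died]    # predicted risks for patients who died
    neg = risk[~died]   # predicted risks for survivors
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy example: 2 deaths, 2 survivors; 3 of 4 death-survivor pairs
# are correctly ordered by predicted risk.
print(c_statistic([1, 1, 0, 0], [0.90, 0.40, 0.50, 0.10]))  # → 0.75
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination; the 0.75 to 0.78 values reported here fall in the conventionally "acceptable" range.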
Laboratory‐Based Acute Physiology Score model development.
The Laboratory‐Based Acute Physiology Score (LAPS) is a validated score that uses laboratory data derived from an EHR to predict in‐hospital mortality across conditions, including HF. LAPS uses a 2‐stage algorithm. First, selected variables are used to stratify patients into low and high mortality risk groups. Then 14 laboratory values (anion gap; albumin; arterial oxygen, pH, and carbon dioxide; bicarbonate; bilirubin; blood urea nitrogen; creatinine; glucose; hematocrit; sodium; troponin I; white blood cell count) are added to the algorithm to calculate a score.17, 18 For laboratory values that are not available, the algorithm assigns points based on the patient's stage‐1 mortality risk group (rather than using imputation).17, 18 Because LAPS is designed to be used as a variable in a model that includes other patient characteristics when predicting mortality, we developed a generalized estimating equation logistic mortality prediction model using the LAPS score along with age, sex, race, and comorbidities. Using backward selection, we retained variables in the final model with P<0.05. We also previously externally validated (at the patient level, in a population with HF) the LAPS model by comparing its performance to the performance of other published clinical HF models.16
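The 2‐stage handling of missing laboratory values can be sketched as follows. The lab names, point functions, and missing‐value penalties below are invented placeholders used only to show the structure of the algorithm; the published LAPS weights are in the cited references (17, 18):

```python
def laps_style_score(labs, point_fn, missing_points, risk_group):
    """Sum points over labs; a lab that was never drawn contributes
    points determined by the patient's stage-1 risk group ('low' or
    'high') rather than an imputed value. All tables here are
    hypothetical illustrations, not the published LAPS weights."""
    total = 0
    for lab, value in labs.items():
        if value is None:                      # lab not measured
            total += missing_points[risk_group]
        else:
            total += point_fn[lab](value)
    return total

# Invented point functions for 2 of the 14 LAPS laboratory inputs
point_fn = {
    "sodium": lambda v: 0 if 135 <= v <= 145 else 3,
    "creatinine": lambda v: 0 if v < 1.5 else 4,
}
missing_points = {"low": 0, "high": 2}  # hypothetical stage-1 penalties

# Abnormal sodium (3 points) plus an undrawn creatinine in a
# high-risk patient (2 points) yields a score of 5.
score = laps_style_score(
    {"sodium": 128, "creatinine": None}, point_fn, missing_points, "high"
)
print(score)  # → 5
```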
Calculation of RSMRs.
For the external hospital‐level validation of the Premier model in the HealthFacts data set, we compared RSMRs derived using the Premier model to those derived using the LAPS model. To calculate RSMRs for each hospital, we fit hierarchical generalized linear models using variables selected in the model‐development step. We used generalized linear mixed models with a logit link to predict mortality and included covariates and a random effect for hospital. We assumed that random hospital effects are normally distributed and independent of hospital‐level covariates. This method adjusts for within‐hospital correlation of the observed outcomes. It also models the assumption that there are underlying differences between hospitals by allowing a random hospital intercept. For each model, we estimated RSMRs as the ratio of predicted mortality in each hospital given its patient mix and hospital‐specific effect, divided by expected mortality given the patient mix and average hospital effect (ie, the mortality if the patients were treated at the “average” hospital). Next, we used bootstrap methods to develop a 95% confidence interval (CI) estimate of risk‐standardized mortality for each hospital. To do this, we repeatedly sampled 55 hospitals with replacement (500 samples) from the 55 hospitals in the data set; we repeated the hierarchical generalized linear modeling with each sample and derived the risk‐standardized mortality for each hospital, ultimately giving us ≈500 RSMR estimates for each hospital. We then used the 2.5 and 97.5 percentiles of the distribution for each hospital as the 95% CI estimate of RSMR. To compare RSMRs derived from the Premier and LAPS models, we first examined the distribution of RSMRs using histograms. We then used linear regression to model the association between the 2 rates, weighting each hospital by number of observations.11 An intercept close to 0 and slope close to 1 would indicate similar RSMRs by the 2 models. 
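The standardization and percentile‐bootstrap steps above can be sketched schematically. The per‐hospital predicted and expected mortalities below are made‐up numbers standing in for outputs of the fitted hierarchical model, and the bootstrap draws are simulated rather than produced by actually refitting the model 500 times:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outputs of a fitted hierarchical model for 3 hospitals:
# mean predicted mortality using each hospital's own random effect, and
# mean expected mortality using the average hospital effect.
predicted = np.array([0.050, 0.032, 0.061])
expected = np.array([0.040, 0.041, 0.047])

# RSMR: ratio of predicted to expected mortality, expressed as a rate
# by scaling with the overall observed mortality (one common
# convention; the study reports hospital RSMRs of 2%-7%).
overall_rate = 0.04
rsmr = predicted / expected * overall_rate

# Percentile 95% CI for hospital 0 from ~500 bootstrap estimates
# (simulated here; in the study, 55 hospitals were resampled with
# replacement and the hierarchical model refit for each sample).
boot = rng.normal(rsmr[0], 0.004, size=500)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(round(rsmr[0], 3))  # → 0.05
```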
We also calculated the correlation coefficient between the Premier and LAPS RSMRs and the median difference between the 2 models' estimates.
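The comparison of the two sets of hospital RSMRs can be sketched as a volume‐weighted least‐squares slope plus a Pearson correlation. The RSMR values and admission counts below are toy numbers for 4 hypothetical hospitals, not the study data:

```python
import numpy as np

def weighted_slope(x, y, w):
    """Slope of a weighted least-squares line of y on x, weighting
    each hospital by its number of admissions."""
    xm = np.average(x, weights=w)
    ym = np.average(y, weights=w)
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

# Toy RSMRs (as proportions) and admission counts for 4 hospitals
laps = np.array([0.02, 0.03, 0.05, 0.07])
premier = np.array([0.025, 0.03, 0.048, 0.065])
n_admits = np.array([120, 300, 80, 50])

slope = weighted_slope(laps, premier, n_admits)   # near 1 => agreement
r = np.corrcoef(laps, premier)[0, 1]              # Pearson correlation
```

With all weights equal, `weighted_slope` reduces to the ordinary least‐squares slope; a slope near 1, intercept near 0, and high correlation would indicate the two models rank and rate hospitals similarly.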
Description of cohort definition and sensitivity analyses.
To better understand the HealthFacts validation cohort (which included both patients with a principal diagnosis of HF and patients with a principal diagnosis of respiratory failure and secondary diagnosis of HF), we compared characteristics of patients with each principal diagnosis using χ2 or Wilcoxon tests. We then conducted a sensitivity analysis limiting our cohort to patients with a principal diagnosis of HF and compared model fit to the results from the full cohort. All analyses were carried out using SAS version 9.4 (SAS Institute) and STATA version 13 (StataCorp).
Derivation and Internal Validation Cohort
The PDW included 433 hospitals that contributed a total of 200 832 HF patients during the study period, with between 1 and 2125 patients per hospital (Table 1). Mean patient age was 73 years; approximately half were women (53%), and the majority (65%) were white. The principal diagnosis was HF in 90% of patients and respiratory failure in 10% of patients. Approximately 71% of patients had hypertension, 41% of patients had chronic obstructive pulmonary disease, 46% had diabetes mellitus, and 40% had chronic renal insufficiency. Patients commonly had additional POA acute diagnoses, including acute myocardial infarction (4%), acute kidney injury (17%), and atrial fibrillation (39%). Within the first 2 days of hospitalization, 19% of the patients were admitted to the intensive care unit, 5% received invasive mechanical ventilation, 6% received noninvasive ventilation, 5% received inotropes, and 7% received intravenous vasodilators. Overall, 8110 patients (4%) died in the hospital. The 80% derivation sample and the 20% internal validation sample were statistically similar to each other and to the larger cohort.
For external validation, we included 19 050 patients from 55 hospitals that contributed pharmacy, diagnosis, and laboratory data to the HealthFacts database. Compared with the PDW, the HealthFacts cohort had a higher percentage of black patients (28%), a slightly younger mean age (72), fewer Medicare patients (56% versus 77% in PDW), and a similar prevalence of comorbidities (Table 1). The rates of early mechanical ventilation (5%), noninvasive ventilation (7%), inotropes (6%), vasodilators (7%), and in‐hospital death (4%) were also similar.
The majority (17 391 of 19 050, or 91.3%) of the included patients had a principal diagnosis of HF, whereas the remaining 1659 had a principal diagnosis of respiratory failure with a secondary diagnosis of HF (Table 2). Nearly all (18 737 of 19 050, or 98.3%) patients included in the HealthFacts cohort received diuretics during their first 2 days of hospitalization. Among patients with a principal diagnosis of HF, 98.8% received diuretics within the first 2 days of hospitalization, and the remainder received ≥1 of the other HF therapies. Among patients with a principal diagnosis of respiratory failure, 93% received diuretics within the first 48 hours, and the remaining 7% received ≥1 of the other therapies (inotropes, vasodilators, or intra‐aortic balloon pump).
There were further differences between patients with a principal diagnosis of HF and a principal diagnosis of respiratory failure. Patients with a principal diagnosis of respiratory failure and secondary diagnosis of HF were slightly younger than patients with a principal diagnosis of HF and were more likely to be white (versus other races), to have comorbid illnesses, or to have acute conditions (Table 2). The respiratory failure group also had a much higher mortality rate (13% versus 3%) than the group with a principal diagnosis of HF.
The Premier model showed acceptable calibration across deciles of predicted mortality (ranging from 0.9% to 14.9%), with C statistics of 0.75 (95% CI, 0.74–0.76) in the PDW cohort and 0.76 (95% CI, 0.74–0.77) in the validation cohort. For further details, please see the prior publication16; for the model coefficients, see Table 3.
LAPS also showed good calibration in the validation cohort across deciles of predicted mortality (ranging from 0.7% to 16.0%), with a C statistic of 0.78 (95% CI, 0.76–0.80; for the model coefficients, see Table 4).
Premier Versus LAPS: Profiling Hospitals
RSMRs in the HealthFacts data had distributions that varied slightly by model. For both Premier and LAPS, distributions of hospital‐level RSMRs ranged from 2% to 7% (Figure 1). The slope of the weighted regression line of the Premier‐ versus LAPS‐specific mortality rates was 0.71 (SE: 0.07). The Pearson correlation coefficient of the standardized mortality rates from the 2 models was 0.82 (P<0.001; Figure 2). The median difference between RSMRs estimated from the models was 0.0001.
We conducted a sensitivity analysis in which we applied the model developed in the Premier data set to a HealthFacts cohort limited to patients with a principal diagnosis of HF to determine if our inclusion of patients with a principal diagnosis of respiratory failure and secondary diagnosis of HF affected our results. We found that the Premier model in a cohort of patients with a principal diagnosis of HF had a C statistic of 0.76 (95% CI, 0.75–0.76) and a similar calibration curve to the full cohort. The C statistic for the LAPS model was also very similar to the main cohort (0.76; 95% CI, 0.74–0.78) and had a similar calibration curve. Because limiting the cohort to a principal diagnosis of HF excluded a large proportion of the deaths, with more deaths excluded from some hospitals than others (due to variation in coding of respiratory failure across hospitals), we opted not to calculate RSMRs in this limited cohort.
Using a mortality prediction model that showed good performance in a multihospital billing data set composed of inpatients with HF, we calculated hospital RSMRs in a multihospital data set containing information from >50 hospitals' EHRs. When we compared the Premier model's RSMRs with RSMR estimates derived from a clinical model that uses laboratory data, we found that the 2 models produced estimates of hospitals' risk‐standardized mortality that were similar, with a correlation coefficient of 0.82. This suggests that the Premier model could be a useful tool for describing the quality of HF care provided by hospitals in situations in which clinical data are not available.
Our model builds on an existing hospital‐profiling model used for hospitalized HF patients. Krumholz et al developed and validated an administrative claims–based risk‐adjustment model for HF with the purpose of characterizing hospital quality. This model, which is used by the Centers for Medicare and Medicaid Services (CMS) to compare mortality rates across hospitals for public reporting purposes, produces results that are very similar to medical records–based models.10, 11, 19, 20 There are several key differences between our model and the CMS model. First, the CMS model was developed to predict 30‐day mortality rates, whereas ours predicts in‐hospital mortality. Second, unlike our model, the CMS model contains all claims in the year before hospitalization. These claims are not available in data sets that contain only information on hospitalizations, such as PDW. Consequently, our model may be more broadly applicable to databases that contain information about hospitalizations but lack outpatient data. Third, the CMS model does not include patients with a principal diagnosis of respiratory failure, which limits its use to patients with HF as a principal diagnosis. Fourth, the CMS model was developed before widespread use of POA codes, so it did not originally include POA acute diagnoses, and POA acute diagnoses have not been added to the CMS model in recent years. In contrast, our model attempts to adjust for presenting severity by including some acute diagnoses that were present at the time of admission (eg, pneumonia or acute myocardial infarction). Finally, our model includes all patients aged ≥18 years, whereas the CMS model includes only Medicare patients, the vast majority of whom are aged ≥65 years. Despite these differences, our C statistics compare favorably to the CMS model, which had a C statistic of 0.70.11
The validation of a model that can be used to profile hospital quality of HF care has some important implications. Because it can be used with almost any multihospital data set, this model can be used by groups of hospitals that come together to improve quality (“quality improvement collaboratives”) and will allow such collaboratives to identify hospitals that need the most focused efforts. Perhaps more important, our model can be used to identify high‐performing hospitals (eg, Figure 2). It is possible that hospitals with lower mortality rates use quality‐improvement and care‐coordination strategies to standardize the care of patients with HF that may lead to better patient outcomes. Although we cannot use this current data set to identify the strategies that are most likely to improve care, future studies should consider using qualitative methods, such as those described by Bradley and Krumholz,21 to do so. These interviews might include questions about hospital units that specialize in the care of HF patients (including nurses, physicians, and managers with expertise in HF care), use of protocols for early identification of volume status, coordinated efforts to identify patients who would most benefit from intensive care and procedures, appropriate use of palliative care and hospice for patients with end‐stage disease, and interventions to integrate team members.
This study has several strengths. First, we validated the model for use at the hospital level in a separate population with different demographic characteristics. Second, we previously demonstrated the patient‐level performance of the Premier model compared with published clinical HF mortality prediction models.16 Third, our model is one of only 2 clinical prediction models in the HF literature aimed specifically at benchmarking hospitals and is potentially more broadly applicable than the CMS model for the reasons stated. Among published HF models that predict mortality in hospitalized HF patients,22, 23, 24, 25, 26 most contain clinical variables, such as systolic blood pressure and serum creatinine level.22, 23, 25 In contrast, our model does not require clinical data but performed similarly to a model that does.
Despite these strengths, this study also has several limitations. First, our model predicts in‐hospital mortality rather than 30‐ or 90‐day mortality because our data sets do not contain information about postdischarge deaths. Although inpatient mortality is widely used in outcomes research,26, 27, 28 there are concerns that hospitals with shorter length of stay or patterns of discharging end‐of‐life patients to palliative care or hospice might have lower in‐hospital mortality but similar 30‐day mortality.29 This is also an issue because CMS quality metrics use 30‐day rather than inpatient mortality. Second, the lack of clinical data also introduced an additional limitation: We could not determine HF etiology or ejection fraction, except by using diagnosis codes. Finally, our definition of HF was based on a combination of diagnosis and billing codes and does not include clinical data. We used a broad set of diagnosis codes to identify patients (with both HF and respiratory failure as principal diagnoses), but also required the presence of at least 1 HF‐specific treatment early in the hospitalization as a proxy for clinical findings of HF. Although many prior studies and CMS quality metrics use diagnosis codes alone to identify patients hospitalized with HF,10, 30, 31, 32 we believe that using criteria that combine the principal diagnosis code with early therapies is a more specific method for identifying hospitalizations for which the primary issue is HF.16 Furthermore, the high percentage of patients treated with diuretics provides additional evidence that these were hospitalizations for which the primary issue was HF. 
Slightly fewer patients with a principal diagnosis of respiratory failure received diuretics, but we feel that this is to be expected, given that these patients are the sickest and are most likely to suffer from low blood pressure, acute kidney injury, and other factors that would limit the use of diuretics but would increase the likelihood that they would receive other HF therapies (eg, inotropes). Moreover, we used this broad set of diagnosis codes because prior work suggests that there is variation across hospitals in the use of the code for acute respiratory failure, with some hospitals using this code more often than others.33 If we had failed to include patients with a principal diagnosis of respiratory failure in a hospital‐level validation, the hospitals that use this code more frequently for their sicker patients with HF might appear to have a healthier cohort of patients with HF.
In conclusion, a mortality model designed to support hospital profiling of inpatient mortality for patients with HF appeared to produce similar results to a clinical model. These data suggest that this model could be useful for comparing quality of HF care across hospitals, especially if using data sets that contain only information from hospitalizations (ie, those that lack prior outpatient data) or if clinical data are not available. This model also allows for identification of hospitals with low mortality rates, creating an opportunity to conduct future studies that examine strategies of high‐performing hospitals for HF care.
The study was conducted with funding from the Institute for Healthcare Delivery and Population Science at Baystate Medical Center. Dr Lagu is supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award no. K01HL114745. Dr Stefan is supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award no. K01HL114631‐01A1. Dr Pack was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under award no. KL2TR001063. Dr Lindenauer is supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award no. 1K24HL132008.
We would like to acknowledge, for help with article preparation and submission, Lindsey Russo (BS, public health from the University of Massachusetts, Amherst and master's candidate in the School of Public Health at the University of Massachusetts, Amherst) and Jessica Meyers (BS, School of Public Health at the University of Massachusetts, Amherst). We obtained permission to acknowledge Ms Russo and Ms Meyers. Dr Lagu had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
- Lloyd‐Jones D, Adams RJ, Brown TM, Carnethon M, Dai S, De Simone G, Ferguson TB, Ford E, Furie K, Gillespie C, Go A, Greenlund K, Haase N, Hailpern S, Ho PM, Howard V, Kissela B, Kittner S, Lackland D, Lisabeth L, Marelli A, McDermott MM, Meigs J, Mozaffarian D, Mussolino M, Nichol G, Roger VL, Rosamond W, Sacco R, Sorlie P, Roger VL, Stafford R, Thom T, Wasserthiel‐Smoller S, Wong ND, Wylie‐Rosett J. Heart disease and stroke statistics–2010 update: a report from the American Heart Association. Circulation. 2010;121:e46–e215.
- Centers for Medicare and Medicaid Services. Hospital Compare. Available at: https://www.medicare.gov/hospitalcompare. Accessed January 23, 2018.
- Krumholz HM, Brindis RG, Brush JE, Cohen DJ, Epstein AJ, Furie K, Howard G, Peterson ED, Rathore SS, Smith SC, Spertus JA, Wang Y, Normand S‐LT; American Heart Association, Quality of Care and Outcomes Research Interdisciplinary Writing Group, Council on Epidemiology and Prevention, Stroke Council, American College of Cardiology Foundation. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113:456–462.
- Krumholz H, Normand S‐L, Galusha D, Mattera J, Rich A, Wang Y, Wang Y. Risk‐adjustment models for AMI and HF 30‐day mortality: methodology. Prepared for the Centers for Medicare & Medicaid Services under subcontract #8908‐03‐02. Yale University; 2005. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier3&cid=1163010421830. Accessed January 23, 2018.
- Krumholz HM, Wang Y, Mattera JA, Wang Y, Han LF, Ingber MJ, Roman S, Normand S‐LT. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113:1693–1701.
- Lagu T, Pekow PS, Shieh M‐S, Stefan M, Pack QR, Amin Kashef M, Atreya AR, Valania G, Slawsky MT, Lindenauer PK. Validation and comparison of seven mortality prediction models for hospitalized patients with acute decompensated heart failure. Circ Heart Fail. 2016;9:e002912.
- Krumholz HM, Merrill AR, Schone EM, Schreiner GC, Chen J, Bradley EH, Wang Y, Wang Y, Lin Z, Straube BM, Rapp MT, Normand S‐LT, Drye EE. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2:407–413.
- Bernheim SM, Grady JN, Lin Z, Wang Y, Wang Y, Savage SV, Bhat KR, Ross JS, Desai MM, Merrill AR, Han LF, Rapp MT, Drye EE, Normand S‐LT, Krumholz HM. National patterns of risk‐standardized mortality and readmission for acute myocardial infarction and heart failure. Update on publicly reported outcomes measures based on the 2010 release. Circ Cardiovasc Qual Outcomes. 2010;3:459–467.
- Fonarow GC, Adams KF, Abraham WT, Yancy CW, Boscardin WJ; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Risk stratification for in‐hospital mortality in acutely decompensated heart failure: classification and regression tree analysis. JAMA. 2005;293:572–580.
- Eapen ZJ, Liang L, Fonarow GC, Heidenreich PA, Curtis LH, Peterson ED, Hernandez AF. Validated, electronic health record deployable prediction models for assessing patient risk of 30‐day rehospitalization and mortality in older heart failure patients. JACC Heart Fail. 2013;1:245–251.
- Peterson PN, Rumsfeld JS, Liang L, Albert NM, Hernandez AF, Peterson ED, Fonarow GC, Masoudi FA; American Heart Association Get With the Guidelines‐Heart Failure Program. A validated risk score for in‐hospital mortality in patients with heart failure from the American Heart Association get with the guidelines program. Circ Cardiovasc Qual Outcomes. 2010;3:25–32.
- Krumholz HM, Hsieh A, Dreyer RP, Welsh J, Desai NR, Dharmarajan K. Trajectories of risk for specific readmission diagnoses after hospitalization for heart failure, acute myocardial infarction, or pneumonia. PLoS One. 2016;11:e0160492.
- Xu X, Li S‐X, Lin H, Normand S‐LT, Kim N, Ott LS, Lagu T, Duan M, Kroch EA, Krumholz HM. “Phenotyping” hospital value of care for patients with heart failure. Health Serv Res. 2014;49:2000–2016.
- Partovian C, Gleim SR, Mody PS, Li S‐X, Wang H, Strait KM, Allen LA, Lagu T, Normand S‐LT, Krumholz HM. Hospital patterns of use of positive inotropic agents in patients with heart failure. J Am Coll Cardiol. 2012;60:1402–1409.