This past year, when revenue leaders were asked to rate how much focus they planned to apply to various initiatives in the year ahead (on a scale of one to 10, with 10 being the most important), staffing as well as productivity and compensation analyses received an average score of 7. And this focus extends not just to frontline staff, but to a wide variety of roles and levels. For instance, one question recently posed through HBI’s Analyst Advisory service: how do you measure the performance of the individuals measuring everyone else’s performance?
Based on previous HBI survey data, more than half of providers have a centralized quality or education department for their revenue cycle, while roughly a quarter embed trainers or auditors within separate revenue cycle functions. While there does not yet appear to be a definitive answer or best practice, organizations are currently attempting to gain insight into these teams’ impact in two main ways: measuring indicators tied to their objectives, and surveying the staff they interact with.
Measuring Project-Related KPIs
While these departments may have recurring duties, such as performing a set number of audits per week or month, research shows they often aim to identify specific education needs, provide resources or training on those items, and measure the outcomes thereafter. For example, a Revenue Cycle Education Team of seven specialists at Community Health Network in Indiana initially focused on standardizing point-of-service collections training across all sites in addition to regular onboarding duties. After performing more than 40 collections training sessions in one month, the team found that point-of-service collections as a percentage of total collections increased from 0.59% to 0.75%.
This health system is also tracking how many courses are offered, growth in the number of courses, and the number of employees trained. As the team has taken on more specialized training initiatives, the organization has seen A/R days decrease to below 40, A/R over 90 days fall below 20%, and the percentage of all ED denials related to eligibility decline.
A revenue cycle leader at another Midwest health system with more than 4,000 beds, which at last count had seven quality auditors in addition to a supervisor, echoes this approach. “When we have things identified by our quality auditors as a need for reeducation or new education or initiatives, we first audit to see if we’re meeting criteria, but then we’ll go behind with another audit to see if we were able to achieve what we set out to.”
This same health system also provides audit results to department managers through SharePoint, and managers are given the ability to “approve” audits after discussing findings with their staff. Considering this practice, measuring the number of times an audit is not approved or perhaps challenged could be another way to monitor performance. Auditors are expected to complete end-of-week summaries with the trends they’ve observed, logging those in SharePoint as well. Leadership can then go back and determine which trends have or have not been addressed.
A nearly 1,000-bed Philadelphia-based health system, which staffed two trainers and two quality assurance coordinators within a quality assurance and training department largely focused on patient access, adds another perspective: its QA coordinators are expected to review one day of work per FTE per week. In addition, they regularly evaluate wait times, POS collections, and satisfaction scores.
The revenue cycle leader at one of the aforementioned health systems also holds monthly meetings with department managers and quality department representatives to discuss whether the criteria being audited continue to meet their needs. Similarly, the education leader is in the process of launching a training video library through which the department will be able to track how long videos are watched, how many people have viewed them, who those viewers are, and a general rating of relevance or helpfulness. This will indicate whether the organization’s instructional designers are being effective and whether staff are adhering to expectations.
This practice can be adopted more generally as well: allowing staff to answer a short survey on whether the audit or education was helpful and relevant, and perhaps whether the educator was courteous.
As these types of departments continue to be implemented and evolve, so too will their duties, expectations, and performance standards. In the meantime, leaders can learn from their peers how best to ensure that education and quality staff members are truly driving improvement within their areas of focus.