The purpose of this study was to determine whether a large group dental practice with semi-autonomous clinics, providers with autonomy, and staff representing many professional roles could use a deliberative engagement process to endorse clinic-specific implementation strategies and increase providers’ adherence to the PDA’s pit-and-fissure dental sealant guideline. The rationale was that deliberative engagement is less time- and resource-intensive than existing approaches, which are not feasible in clinical practice. After participating in a deliberative forum, participants indicated their level of agreement with implementation strategies, and clinic size was associated with how strongly respondents disagreed that their clinic should adopt specific strategies. Following the deliberative engagement process, however, providers did not place sealants on more occlusal NCCLs than during the nonintervention period. We did not conduct post hoc analyses to identify factors associated with whether a clinic increased or decreased its rate of sealant placement.
The deliberative engagement process enabled clinic leaders to obtain from their providers and staff clinic-specific opinions, about both barriers and possible implementation strategies, that they could use to guide their decision making. Specifically, the deliberative engagement process revealed only moderate agreement with the barriers and little buy-in from providers and staff to adopt implementation strategies. Furthermore, providers’ and staff’s support for implementation strategies varied with clinic size, which supports using engagement processes that can elicit clinic-level results. The sources of similarity and dissimilarity across clinics should be identified in future research.
We also examined mechanisms by which the deliberative engagement process may have influenced respondents’ endorsement of implementation strategies and placement of sealants: developing an informed opinion, sharing voice, and adopting the clinic report recommendations. We found little change in forum participants’ opinions about the implementation strategies from the beginning to the end of the forum. We found that 70% of the forum participants returned their surveys, which is how they shared their voice with leadership. Finally, we found that clinic leaders did not deploy any implementation strategies, which is consistent with the moderate level of endorsement by providers and staff in the clinic reports.
We sought to determine whether receiving the workbooks and participating in the forums would enable forum participants to develop an informed opinion about the implementation strategies from the beginning to the end of the forum. We observed that the majority of the forum participants indicated the same opinions before and after the forum. We also observed that the views participants shared during the forums were not diverse25. Thus, participants were not exposed to a range of perspectives that might have inspired them to reconsider their opinions. It is possible, however, that participants formed their opinions prior to the forum following exposure to the workbook, a result to which our measure was not sensitive. Alternatively, participants could have become more informed during the forum without changing their opinions. Ultimately, we were not able to address this objective completely, and future research is needed.
With respect to the effects of the deliberative intervention on sharing voice, we found mixed results. As evidenced by the high rate at which forum participants returned their surveys, forum participants and survey respondents expressed ideas to improve their clinics (i.e., shared promotive voice) as well as concerns about practices that could be harmful to their clinics (i.e., shared prohibitive voice). This evidence of sharing voice was supported by our qualitative analysis of the forum chats26. In the quantitative analysis of participants’ perceptions of voice on the post-session self-report questionnaires, we found that participants somewhat agreed that leadership would take their suggestions into consideration and neither agreed nor disagreed that they shared voice. Thus, their perception, on average, differed from their behavior and from what we observed. This discrepancy warrants future research.
Our second objective was to observe whether clinic leaders would adopt the recommendations in the clinic reports. In our three-month follow-up interviews, we learned that clinic leaders did not adopt any implementation strategies. This may have been because the reports demonstrated that slightly less than half the providers and staff agreed with the barriers, and close to a third thought there was no need for an implementation strategy. Based on these results, we hypothesize that clinic leaders interpreted the clinic reports as recommending not to deploy implementation strategies. In the absence of the deployment of implementation strategies, it is unlikely that behavior would change. Because the providers did not place more sealants, we were unable to evaluate our third objective, which was to determine whether forming an informed opinion and sharing voice mediated the relationship between experiencing the deliberative engagement intervention and placing sealants.
One strength of the study was that we obtained many process measures, using both quantitative and qualitative methods, which enabled us to evaluate whether deliberative engagement affected the mechanisms by which it is proposed to work. Additionally, we designed the deliberative engagement to target barriers and build on facilitators we identified in our formative work according to the Capability, Opportunity, Motivation – Behavior model of behavior change. In addition, although the denominator for calculating the survey response rate could be all 896 providers and staff (response rate = 28.2%; reported in Table 2), the 680 providers and staff in the intervention clinics (response rate = 37.2%), or the 363 forum participants (response rate = 69.7%), we have confidence that the survey results reflect the perspectives of forum participants with precision27. Finally, we were able to administer the intervention with fidelity.
UBT meetings involve in-person, verbal engagement, whereas, in our study, participants had to use an unfamiliar, chat-based tool (i.e., the Common Ground for Action online platform). Additional practice with the online platform or enabling verbal interaction could overcome this limitation. It is unclear whether the COVID-19 pandemic affected our evaluation of the intervention (see Supplemental File 1). During the time of our study, the clinics experienced staffing shortages and low morale because of the pandemic. However, the barriers preventing guideline adoption predated and were not affected by the pandemic. In response to the pandemic, dentists initially were not allowed to place sealants, but this restriction had ended by the time the intervention began. Additionally, in response to the pandemic, clinic UBT meetings shifted from being in person to being virtual. Finally, as we reported elsewhere, there are a number of potential confounders or limitations in how we measured sealant placement that could have affected our results23. These include possible misclassification of carious lesions and thus errors in the eligibility of these lesions for sealants and lack of information about whether patients were offered sealants but elected not to receive them.
First, it will be important to determine whether the lack of a range of perspectives among forum participants is a feature of colleagues who work together. Second, throughout the study, providers and staff articulated several barriers. More work may need to be done to identify and obtain consensus regarding the important barriers. We would be interested in determining whether a deliberative process could assist with this step. Third, it is unknown whether these results would generalize to providers and staff with less homogeneity or to healthcare providers more broadly. Future research should examine these questions.
In sum, we conclude that deliberative engagement creates resources that dental providers, staff, and clinic leaders can draw on to develop opinions and determine actions. It provided an efficient means to elicit oral health providers’ and staff members’ endorsement of barriers to the placement of sealants and of implementation strategies, specific to their clinics, to overcome those barriers. Deliberative engagement may enable healthcare organizations composed of semi-autonomous clinics both to elicit the wisdom of providers who practice with relative autonomy and to engage these providers in a process of change. One means by which this may have occurred is through providers and staff sharing their voice. However, deliberative engagement alone is not sufficient to increase providers’ adherence to a clinical practice guideline. As identified above, further research is recommended.
Methods
Study design
The DISGO study used a cluster-randomized, stepped-wedge design28. Because the resources available to clinics vary depending on their size, clinic size may affect the implementation strategies providers and staff endorse. Thus, based on both clinic volume and provider staffing full-time equivalents in 2019, we categorized clinics into tertiles according to size as large, medium, or small. We used the rand function in SAS on a Uniform(0, 1) distribution to number clinics within each tertile from 1 to 5. Clinics were assigned to five clusters defined by number, with every cluster containing a single clinic from each tertile. A study project manager then enrolled clinics by the cluster number. We deployed the deliberative engagement process by cluster, allowing six weeks between each cluster. Prior to launching the deliberative engagement process in the clusters, we deployed it in one vanguard clinic. For the study, the nonintervention period began January 1, 2021, and the intervention period ended December 31, 2021. Each clinic had its own nonintervention and intervention periods. Each clinic’s intervention period began when the providers and staff in that clinic received a summary report of their recommendations following their deliberative forum. The deliberative engagement process was deployed in the vanguard clinic in March 2021 and in the final clinic of the last cluster in October 2021. This project is registered at ClinicalTrials.gov with ID NCT04682730. The trial was first registered on 18/12/2020 and is reported following the “Reporting of stepped wedge cluster randomized trials: Extension of the CONSORT 2010 statement” guideline.
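The cluster assignment described above can be sketched as follows. The clinic identifiers and random seed are illustrative placeholders (the study itself used the rand function in SAS, not Python), but the logic matches the design: randomly number clinics 1–5 within each size tertile, then form clusters from the clinics sharing a number.

```python
import random

# Hypothetical clinic roster by 2019 size tertile -- identifiers are
# illustrative placeholders, not the study's actual clinics.
clinics = {
    "large":  ["L1", "L2", "L3", "L4", "L5"],
    "medium": ["M1", "M2", "M3", "M4", "M5"],
    "small":  ["S1", "S2", "S3", "S4", "S5"],
}

rng = random.Random(2021)  # fixed seed so the sketch is reproducible
clusters = {number: [] for number in range(1, 6)}
for tertile, members in clinics.items():
    order = members[:]
    rng.shuffle(order)  # random numbering 1..5 within the tertile
    for number, clinic in enumerate(order, start=1):
        clusters[number].append(clinic)

# Each cluster now contains exactly one large, one medium, and one
# small clinic; the deliberative process is deployed cluster by cluster.
```

Because numbering is done independently within each tertile, every cluster is guaranteed to draw one clinic per size category, preserving balance across the stepped-wedge sequence.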
Study site and sample
The KPD program is part of the Kaiser Permanente Northwest (KPNW) integrated healthcare system and provides comprehensive, prepaid dental care services at KPNW-owned facilities to over 250,000 dental plan members in Oregon and southwest Washington. Thus, cost is not a factor for patients. Across all KPD plans, there are no age or frequency limitations for placing dental sealants. In addition, KPD is part of Health Share of Oregon (www.healthshareoregon.org), a large, coordinated care organization that services the state’s Medicaid population in the tri-county Portland metropolitan area.
Staff are employed in two different organizations. Dental hygienists, expanded function dental assistants, and administrative staff are employed by KPD and are mostly unionized. Dentists and denturists are owners or employees of PDA, which contracts with KPD to provide comprehensive dental services to KPD members. KPD engages with its workforce and PDA providers through a labor-management partnership. In the dental clinics, this partnership is manifested in a self-directed decision-making structure called a UBT, which is co-led by a clinic manager and a dentist. The team includes all the members of the work group. The purpose of the UBT is to set clinic goals and address issues affecting performance or the work environment; thus, these meetings may serve as a facilitator to implementation. Each clinic has substantial autonomy to tailor its services and goals to fit the needs of its patients.
We included all clinics that were not dedicated urgent care clinics. The deliberative engagement process targeted general and pediatric dentists, dental hygienists, expanded function dental assistants, and administrative staff in the clinics. Specialists who would not be likely to place sealants, such as periodontists, endodontists, and prosthodontists, were excluded.
The study was approved by the Kaiser Permanente – Northwest Region Institutional Review Board and the University of Pittsburgh Institutional Review Board. All methods were performed in accordance with relevant guidelines/regulations. We were granted waivers of signed informed consent for (a) clinic providers and staff who participate in the stepped wedge deliberative loop forums and (b) collecting and using patient data from the electronic health records and administrative data for study purposes. These waivers were obtained from the Kaiser Permanente – Northwest Region Institutional Review Board.
Deliberative engagement process
Clinic providers and staff watched a 15-minute video introduction to the study and to the deliberative engagement process. Participants then experienced the first component of the deliberative engagement process, access to well-vetted background information about the sealant guideline17. Via email, they received a 27-page workbook providing background material on the two barriers and on possible implementation strategies (see the clinical protocol28 and Supplemental File 2 for a description of the workbook).
Participants experienced the second component of the deliberative engagement process, facilitated small group discussion, and the third component, considering options, about a month after receiving the background information. Clinic providers and staff participated in an online, chat- (text-) based, facilitated small group discussion (i.e., deliberative forum) using the Common Ground for Action platform (Kettering Foundation and the National Issues Forums Institute, Dayton, Ohio)29. This platform is designed for deliberative engagement, which does not require face-to-face interaction30,31,32. The target group size was 6–8 participants. Clinic providers and staff were assigned to specific groups based on their clinic role, so that each small group would have a diversity of roles represented. The forum discussions lasted 1.5 hours and occurred in place of a standing UBT meeting.
Participants experienced the fourth component of the deliberative engagement process, sharing informed opinions with leadership, immediately following each forum. We gave clinic providers and staff a link to a Qualtrics survey (Qualtrics, Provo, Utah), where they could both record their recommendations to leadership and complete a survey about the process and their perception of voice. Although eight possible implementation strategies were presented in the deliberative forum, the post-session survey included only the five with the strongest evidence base, as determined in our formative research24,28,33. We summarized survey responses by clinic and provided 4-page reports to clinic leadership, clinic providers and staff, and practice leadership, including the PDA Dental Director for Evidence-Based Practice.
Primary outcome
The primary outcome was calculated for each clinic and for each dentist as the difference between the intervention and nonintervention periods in the rates of placing (see Supplemental File 3) or treatment planning sealants for occlusal NCCLs. This represents patient-level receipt of guideline-adherent care. Sealant application and treatment plans for occlusal NCCLs were extracted from the electronic health record. Whether sealants were offered to patients and declined was not documented in the electronic health record. NCCLs were chosen as the primary outcome because baseline rates were so low that there was opportunity for improvement23.
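As an illustration of how this per-clinic rate difference could be computed from extracted records, the sketch below uses a hypothetical toy table; the column names and values are invented for the example and do not reflect study data.

```python
import pandas as pd

# Hypothetical per-tooth records extracted from an electronic health
# record -- clinics, periods, and outcomes are invented for illustration.
df = pd.DataFrame({
    "clinic": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "period": ["pre", "pre", "post", "post", "pre", "pre", "post", "post"],
    # 1 = sealant placed or treatment planned for the occlusal NCCL
    "sealed": [0, 0, 1, 0, 0, 1, 1, 1],
})

# Rate of sealing per clinic and period, then the intervention-minus-
# nonintervention difference, computed per clinic.
rates = df.groupby(["clinic", "period"])["sealed"].mean().unstack("period")
rates["difference"] = rates["post"] - rates["pre"]
```

The same grouping could be applied per dentist by substituting a provider identifier for the clinic column.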
Proximal outcomes
These outcomes included survey respondents’ agreement with the two barriers we had identified24 and the implementation strategies we provided33 (see Supplemental File 4). These questions were scored on a 5-point Likert-type scale ranging from “Strongly Disagree” (coded 0) to “Strongly Agree” (coded 4) and were followed by free text response boxes for comments and suggestions. Clinic providers and staff answered these questions immediately following the deliberative forum, via the post-session survey. Because clinics could vary in their access to resources and their culture, and because these factors could be associated with the survey respondents’ perceptions of barriers and endorsement of implementation strategies, we also examined whether clinic size was associated with the survey respondents’ agreement with the barriers and endorsement of implementation strategies.
Putative mechanisms of action
We conducted exploratory analyses of three putative mechanisms of action: developing an informed opinion, sharing voice, and adopting the clinic report recommendations. To measure each forum participant’s opinions, for each of the eight possible implementation strategies, we compared each forum participant’s ratings (i.e., ActionWeShould, ActionConflicted, ActionWeShouldNot, ActionUnranked) at the beginning of the deliberative forum with their ratings at the end of the deliberative forum. Ratings were not taken prior to the introductory session because participants were sufficiently unaware of the guideline such that they would not have been able to provide ratings until after attending the orientation and receiving the workbook.
We employed two measures of clinic providers’ and staff’s sharing of voice, which we defined as having the opportunity to share one’s views, and two measures of clinic providers’ and staff’s perceptions of voice. First, we assessed the post-session survey response rate; it was by returning these surveys that participants shared their voice with leadership. Second, we analyzed the transcripts from the deliberative forums using six codes to assess forum participant voice (results published separately26). An example of one of the codes is as follows: “Suggest new behaviors which are beneficial to my clinic.” Third, to assess perception of voice, we administered the Promotive and Prohibitive Voice scales21. Clinic providers and staff completed the scales immediately following the deliberative forum, via the post-session survey. Clinic providers and staff completed the scales twice, first as the scales characterized their deliberative forum and second as the scales characterized past UBT meetings. Each scale has five items and is scored on a 5-point Likert-type scale ranging from “Strongly Disagree” (coded 1) to “Strongly Agree” (coded 5). Fourth, the post-session survey included questions addressing the clinic providers’ and staff’s perception of leadership responsiveness to feedback in the past year and anticipated responsiveness to the feedback from the forum. These questions were scored on the same 5-point Likert-type scale and were followed by a free response box for comments and suggestions.
To assess clinic leaders’ adoption of clinic report recommendations, we interviewed a key leader at each clinic, such as the clinic manager, three months after delivering the clinic report. To guide the interview, we developed a set of 15 open-ended questions that included: “Tell me about your experience with the Deliberative [Engagement Process];” “Do you think the forum led to improved strategies and increased implementation of guidelines?;” and “Were there any conversations that took place after the forum, and can you tell me what was said?” For a description of the data analysis, please see Supplemental File 5. A separate interview was conducted with the Dental Director for Evidence-Based Practice. This interview included six questions asking his thoughts about the summary reports provided to the clinics and any actions taken or planned resulting from those reports.
In addition, we examined fidelity to the deliberative engagement process. Please see the clinical protocol28 for a table presenting all the outcome measures.
Power
Power was calculated using the “stepped wedge” package in Stata 1615, based on the approach developed by Hussey and Hughes16. Our null hypothesis was that providers’ rates of placing or treatment planning sealants for occlusal NCCLs would not differ between the nonintervention and intervention periods. Our alternative hypothesis was that providers’ rates of placing or treatment planning sealants for occlusal NCCLs would increase following the intervention. We assumed a Type I error of 0.05, a 3% rate of placing or treatment planning sealants in the non-intervention period, and a 10% rate in the intervention period, which is an increase seen in similar interventions17. Randomizing 15 clinics along with 1 vanguard clinic using a stepped-wedge approach with these assumptions, we had at least 80% power to detect a difference and close to 100% power under some scenarios.
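For intuition about the magnitudes involved, the study's assumed rates imply the following back-of-the-envelope sample size for a simple one-sided two-proportion comparison. This sketch deliberately ignores clustering and the stepped-wedge structure, which the actual Hussey and Hughes calculation in Stata accounts for, so it is illustrative only.

```python
from scipy.stats import norm

# Study assumptions: 3% sealant rate in the nonintervention period,
# 10% in the intervention period, alpha = 0.05, power = 0.80.
p1, p2 = 0.03, 0.10
alpha, power = 0.05, 0.80

# One-sided test, matching the directional alternative hypothesis.
z_alpha = norm.ppf(1 - alpha)
z_beta = norm.ppf(power)

# Standard two-proportion sample-size formula (per group), ignoring
# cluster correlation and secular trends.
n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p2 - p1) ** 2)
# About 151 eligible lesions per period would suffice under these
# simplified (non-clustered) assumptions.
```

Accounting for intracluster correlation and the staggered rollout inflates this requirement, which is why the study's power calculation used the dedicated stepped-wedge approach rather than a formula like this one.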
Statistical analysis
The primary end point was calculated as the change in the providers’ rates of placing or treatment planning sealants for occlusal NCCLs from before to after clinic providers and staff were exposed to the deliberative engagement intervention. The PDA guideline includes children, adolescents, and adults. To enable comparison with studies following the ADA guideline, in addition to analyzing lesions occurring in children, adolescents, and adults, we also conducted analyses for lesions occurring in just children and adolescents. Per the ADA, we defined “children and adolescents” as persons ranging in age from 6 to 17 years. The rates, and their difference, were treated as continuous variables. We used a generalized linear mixed model to model the intervention effect on sealant placement while nesting teeth within provider and provider within clinic. We also accounted for secular time trends as a categorical fixed effect (Supplemental File 6).
To examine differences between clinics and between survey respondents within clinics on agreement with the barriers and implementation strategies, we conducted one-way ANOVA for each question to compare the mean scores by clinic size (F-test) and the variance within clinic (Bartlett’s test). We tested one-way ANOVA assumptions using the Shapiro-Francia test for normality and Bartlett’s test for homogeneity. For ANOVA results where the F-test yielded a statistically significant (p < .05) result, we conducted pairwise comparisons using a Bonferroni adjustment for multiple comparisons. To quantify change in opinion from the beginning to the end of the deliberative forum, we calculated a Cohen’s kappa for each forum participant. Ratings were treated as changed if the rating made before the forum differed from the rating made at the end of the forum. We used descriptive statistics to summarize the distribution of the Cohen’s kappas and the voice self-reports. All analyses were conducted using Stata 17 (StataCorp, College Station, TX) and RStudio.
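The per-question ANOVA with Bartlett's test, and the per-participant Cohen's kappa on pre- versus post-forum strategy ratings, can be sketched as follows. The data are invented for illustration (the study used Stata and RStudio), and the kappa here is the standard unweighted form implemented directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 0-4 Likert agreement scores for one survey question,
# grouped by clinic size tertile -- invented data, not study results.
small = rng.integers(0, 5, 40)
medium = rng.integers(0, 5, 55)
large = rng.integers(0, 5, 70)

# One-way ANOVA: F-test comparing mean agreement across clinic sizes.
f_stat, p_val = stats.f_oneway(small, medium, large)
# Bartlett's test: homogeneity of variance across the three groups.
b_stat, b_p = stats.bartlett(small, medium, large)

def cohens_kappa(before, after, categories):
    """Unweighted Cohen's kappa between two ratings of the same items."""
    before, after = np.asarray(before), np.asarray(after)
    po = np.mean(before == after)  # observed agreement
    pe = sum(np.mean(before == c) * np.mean(after == c)
             for c in categories)  # agreement expected by chance
    return (po - pe) / (1 - pe)

# One participant's ratings of eight strategies at forum start and end,
# using the rating categories named in the paper (values invented).
cats = ["WeShould", "Conflicted", "WeShouldNot", "Unranked"]
before = ["WeShould", "Conflicted", "WeShould", "Unranked",
          "WeShouldNot", "WeShould", "Conflicted", "Unranked"]
after = ["WeShould", "WeShould", "WeShould", "Unranked",
         "WeShouldNot", "WeShould", "Conflicted", "Conflicted"]
kappa = cohens_kappa(before, after, cats)  # near 1 means little opinion change
```

Summarizing the distribution of such kappas across all forum participants, as the study did, then shows how much opinion change the forums produced overall.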