Dana Jayne Linnell


Dissertation: Overview of the study

This blog post is a modified segment of my dissertation, which I completed under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework

  2. Defining evidence use

  3. Overview of my dissertation study: sample, recruitment, & measures

  4. Question 1: To what extent are interpersonal and research factors related to use?  

  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?

  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 


This study is a good example of working with what you end up with. The original methods failed miserably, and we had to shift them so that we at least ended up with something usable.

Originally, I sought to recruit practitioners by asking researchers and evaluators to invite their practitioner partners, which would also have ensured that actual partnerships were represented. Wow, did that fail! In the end, I had only about 11 paired partners, which was not enough for the methods I had planned.

Fortunately, I also recruited through both the American Evaluation Association (AEA) and the American Educational Research Association (AERA), as well as through existing research-practice partnerships (RPPs) online and via social media. I ended up with roughly even numbers of researchers (n = 94), evaluators (n = 116), and practitioners (n = 82) after a meager ~13% response rate and a roughly 50% completion rate.
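For a rough sense of scale, here's a back-of-the-envelope sketch of what those rates imply about the recruitment pool. These invitation and start counts are not reported in this post; they're back-calculated under the assumptions that the ~13% response rate counts people who started the survey and that both rates apply uniformly to the combined sample:

```python
# Back-of-the-envelope only: the three sample sizes come from the post;
# everything else is back-calculated and approximate.
completed = 94 + 116 + 82        # researchers + evaluators + practitioners = 292
completion_rate = 0.50           # ~50% of those who started the survey finished it
response_rate = 0.13             # ~13% of those invited started the survey (assumed)

started = completed / completion_rate   # implied survey starts
invited = started / response_rate       # implied invitation pool

print(f"Implied survey starts: ~{started:.0f}")   # ~584
print(f"Implied invitations:  ~{invited:.0f}")    # ~4,492
```

If the response rate instead counts completed surveys rather than starts, the implied invitation pool would be roughly half that (292 / 0.13 ≈ 2,246).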

The survey in this study was pretested using the Questionnaire Appraisal System and cognitive interviews (n = 6). If you're interested in what changed as a result of both those processes, check out the appendices in the full dissertation.

The survey first asked participants whether they primarily identified as a researcher, evaluator, or practitioner in the particular partnership they would be describing. Definitions were provided so it was clear what I meant by each term. Next, to prime participants to think about their partnership, they were asked to provide some demographic information about it, including its sector, location, number of members, how long it had been together, and its purpose.

Third, they completed three scales on evidence use:

  * instrumental use, using a mixture of items from three separate questionnaires;
  * conceptual use, using a scale by NCRPP (2016); and
  * process use, using a mixture of items from two separate questionnaires.

Then participants completed scales on five interpersonal factors:

  * relationships (High Quality Work Relationships Scale; Carmeli et al., 2009),
  * communication (a combined scale),
  * commitment to use (self-created),
  * cooperative interdependence (adapted from Johnson & Norem-Hebeisen, 1979), and
  * stakeholder involvement (adapted from Weaver & Cousins, 2004).

Last, they completed two scales on research factors, relevance and rigor, both of which were self-created.

Finally, participants provided additional personal and partnership demographics, such as their position in the partnership, level of involvement and decision-making, education level, and more. The survey took participants a median of 23 minutes to complete, which was a bit too long and likely contributed to the low participation and completion rates.

Most participants worked in education (57%) and in the United States (88%), and many had been in their partnership for five or more years (41%). The primary purpose of the partnerships was either to conduct and use rigorous research (61%) or to impact local improvement efforts (26%).

Want more details on the internal consistency of the scales, the items in the scales, or the sample characteristics? Check out Chapter 2 and the appendices in the full dissertation.