Dana presents at Eval17: Surveying children, using vignettes to train staff, and more!

I am really looking forward to meeting you all at the annual AEA conference, Eval17! I wanted to share with you the details of my various presentations and hope you can make it to any of the ones that pique your interest!

Improving Youth Voice in Evaluations: Strategies for Collecting Better Survey Data from Youth

Friday, November 10 4:30-5:15 in Washington 5

In collaboration with Dr. Tiffany Berry at Claremont Graduate University

This demonstration session will provide examples and recommendations for item-writing, response options, formatting, pre-testing, administration, and analysis for children and adolescents. While many evaluators agree that surveys need to be carefully adapted for children and adolescents to be developmentally appropriate, many are still unclear on effective practices. For instance, some may be unsure of how young is too young for a survey, how to write age-appropriate questions, whether paper or online surveys are better for youth, and what response options should be used. You will leave the session with concrete strategies for developing, administering, and analyzing surveys for children and adolescents.

Using Vignettes to Improve Staff Knowledge about Program Quality

Thursday, November 9 3:15-4:15 in Washington 1

In collaboration with Dr. Tiffany Berry at Claremont Graduate University

Training program staff in what quality means and what high program quality looks like is an important first step toward improving program quality. This presentation explores how vignettes—short stories about hypothetical characters in specific circumstances—can help program staff learn to think about program quality and, for organizations with high evaluation capacity, learn to conduct observations before going out “into the field.” In our work, we used this activity with three different groups (two afterschool programs and one group of budding evaluators); we will discuss how useful the vignettes were for teaching each group what program quality looks like and for preparing them to conduct observations. We will also discuss implications for using vignettes to promote evaluative thinking in organizations and as a strategy for continuous quality improvement.

How effective are logic models? Testing their understandability, credibility, and more [note]Unfortunately, I will be unable to attend this session (I am the second presenter) due to a conflict with one of my other sessions.[/note]

Thursday, November 9 3:15-4:15 in Washington 4

Presented by Ciara C. Paige with Natalie D. Jones, Darrel Skousen, Nina Sabarre, and Tarek Azzam at Claremont Graduate University

Despite the popularity and widespread use of logic models in evaluation, little is known about how cognitively challenging and how credible they are to stakeholders. Through a study conducted on Amazon’s Mechanical Turk (MTurk) platform, variations of a single logic model were tested to determine which components (e.g., data visualization principles, arrows, legends, accompanying narrative descriptions) most affect users’ accuracy of interpretation, response time, mental effort, and perceptions of credibility. Implications for future development in logic modeling and data visualization will be discussed.


I am also the chair for two sessions:
  1. Speaking the Stakeholders’ Language: Lessons Learned When Fostering Contextually Sensitive Practice (Graduate Student and New Evaluators TIG), Thursday, November 9 8:00-9:00 in Marriott Balcony B. I am particularly excited to chair this session because I was the presenters’ class TA. They will present their class evaluation projects and the lessons they learned in designing and implementing evaluations at an early stage in their careers.
  2. Encouraging Engagement and Use: Supplementing Traditional Data Collection and Reporting (PreK-12 Educational Evaluation TIG), Saturday, November 11 8:00-9:00 in Washington 2