Guest Blogger: Accessible Survey Instruments

[Photo of blog author, Chanelle White]

You’ve probably thought a lot about WHAT you ask students when you are assessing learning outcomes. But have you ever considered HOW you ask those questions? I’m talking specifically about universal design, the concept that making things accessible benefits everyone. If you send an assessment to students, it should be accessible to all of them. Many offices use surveys ranging from home-grown Microsoft Word documents to sophisticated tools like Formstack.

Perhaps your college or university is currently re-evaluating all of the surveys you use. Perhaps you’re frantically googling “universal design” (#tbt to me googling “snappy casual” last month in panic mode). Most likely you’re somewhere in between. Here are some considerations as you develop your survey instrument.

Barriers

I can identify three barriers: education (or the lack thereof), fear, and money.

Education

Perhaps your staff isn’t educated about the need for accessible design, the Americans with Disabilities Act and the Rehabilitation Act, or civil rights (but seriously). Research it yourself, take a class, develop a relationship with a campus partner, and challenge yourself to be an advocate for students with disabilities!

Fear

Listen, I’m not an expert in accessible survey instruments nor am I a computer programmer. I’m just a person who wants to help others and break down barriers in this complex educational system. Sometimes you need to cast fear aside: jump in head first, get your hands dirty, and learn something new.

Money

When in doubt, money is always the problem, right? But, you may be thinking, I don’t have any money to make these compliant. Or, my school won’t pay for a new accessible tool. Actually, you can make accessible Word documents for free. Seriously: click File, then Check for Issues, and finally Check Accessibility. As long as you’re not doing complicated data visualization, it’s pretty easy. [Disclaimer: If you are doing some fancy things, you may want to get an expert to look at your documents. If we’re talking about a 10-question survey, I think you can figure it out. May the force be with you.]

Assessment Design

Use person-first language.

Bad: A disabled student asked me a question.

Good: Jennifer asked me a question.

This sounds silly, right? But Jennifer is a person too, and she doesn’t need to be defined by her mobility, disability, wheelchair, etc.

Use headings and be descriptive!

They help students who use screen readers to quickly navigate the page.

Bad: Section I

Good: Personal Information

Use descriptive links.

When looking for information in an email, on a website, or in a Word document, students who use screen readers may tab from link to link to get to it quickly. If you bury the link to your survey in an email behind vague link text, students may give up and not take the survey!

Bad: Click here to take my survey. [Not a real survey]

Bad: Take my survey here, http://www.reallylongsurveyurlthatnostudentusingascreenreaderwantstolistento.com [Definitely not a real survey]

Good: Please take the Oh No It’s an Assessment Blog 2016 Survey. [Still not a real survey]
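
If your survey invitation goes out as an HTML email or lives on a web page, you can even script a quick sanity check for vague link text. Below is a minimal sketch in Python using BeautifulSoup; the file name survey_email.html and the list of “vague” phrases are illustrative assumptions, not part of any particular survey tool.

```python
# Minimal sketch: flag vague link text in an HTML survey invitation.
# "survey_email.html" and the phrase list are illustrative assumptions.
from bs4 import BeautifulSoup

VAGUE_PHRASES = {"click here", "here", "link", "this link", "read more"}

def flag_vague_links(html: str) -> list[str]:
    """Return warnings for links a screen reader user can't interpret out of context."""
    soup = BeautifulSoup(html, "html.parser")
    warnings = []
    for link in soup.find_all("a"):
        text = link.get_text(strip=True)
        href = link.get("href", "")
        if not text:
            warnings.append(f"Link to {href} has no text at all.")
        elif text.lower().rstrip(".!") in VAGUE_PHRASES or text == href:
            warnings.append(f'Link text "{text}" is not descriptive (target: {href}).')
    return warnings

if __name__ == "__main__":
    with open("survey_email.html", encoding="utf-8") as f:
        for warning in flag_vague_links(f.read()):
            print(warning)
```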

Use alternative text.

Does your survey include a bunch of images? Data visualization is awesome, but we need to make sure those images are conveyed to users with screen readers. Adding alt text is often as simple as opening the picture’s Format options and filling in the alt text field, or right-clicking on the image and typing a description of it.
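
For web-based surveys, you can also do a quick automated sweep for images that are missing alt text before you hit send. This is a minimal sketch under the same assumptions as the link check above (BeautifulSoup, and a hypothetical exported file called survey_page.html).

```python
# Minimal sketch: list images with no alt text in an exported HTML survey page.
# "survey_page.html" is a hypothetical file name used for illustration.
from bs4 import BeautifulSoup

def images_missing_alt(html: str) -> list[str]:
    """Return the src of every <img> whose alt attribute is missing or empty."""
    soup = BeautifulSoup(html, "html.parser")
    return [
        img.get("src", "(no src)")
        for img in soup.find_all("img")
        if not (img.get("alt") or "").strip()
    ]

if __name__ == "__main__":
    with open("survey_page.html", encoding="utf-8") as f:
        for src in images_missing_alt(f.read()):
            print(f"Missing alt text: {src}")
```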

Keep it clean.

Bad: Attention students: Take my really awesome survey to win $50!

Good: High-contrast text (dark blue or black on white), easy-to-read fonts, and no flashing buttons or images.
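
If you want to check whether your colors really are high contrast, the WCAG 2 contrast-ratio formula is simple enough to compute yourself. The sketch below is a minimal example; the colors are illustrative, and 4.5:1 is the WCAG AA minimum for normal-size text.

```python
# Minimal sketch of the WCAG 2 contrast-ratio calculation.
# The example colors are illustrative; 4.5:1 is the WCAG AA minimum for normal text.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linear value per the WCAG 2 formula."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

if __name__ == "__main__":
    ratio = contrast_ratio((0, 0, 139), (255, 255, 255))  # dark blue text on white
    print(f"Dark blue on white: {ratio:.1f}:1 ({'passes' if ratio >= 4.5 else 'fails'} WCAG AA)")
```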

Tools

Screen Readers

NVDA (free) and JAWS (costly) are two text-to-speech screen readers that many students with visual impairments use. The biggest disadvantage to these tools is that they require a lot of practice to become comfortable with. If you personally are not familiar with them, perhaps you could strike up a conversation with someone in IT services, web development, or disability resources? You might learn something and gain a new colleague across campus.

Accessibility Checkers

WAVE

A free web accessibility checker. It’s easy to use: simply paste your URL into the search box and BOOM, results.

Total Validator

“Total Validator is an HTML validator, an accessibility validator, a spell checker, and a broken links checker all rolled into one tool, allowing one-click validation of your website” (stolen directly from the Total Validator website). You must download a desktop application. They have a free version and a pro version.

I hope this brief overview gives you somewhere to begin checking accessibility (a11y for short) in your survey instruments. This post was not sponsored, but if you want to sponsor it, please contact my attorney (since I didn’t win the #powerball).

 Chanelle White is the Senior Academic Advisor in the Farmer School of Business at Miami University. Please leave your comments below or feel free to contact Chanelle via email at chanelle.white@miamioh.edu.

Guest Blogger: When Assessment and Data Are Too Much of a Good Thing

[Photo of blog author, Shamika Karikari]

I love coffee. Strong and black. Coffee does something to my soul when I drink it. The smell of roasted coffee beans and steam coming from my favorite mug brings a smile to my face. Beginning my morning with a hot cup of coffee is sure to set a positive tone for the rest of the day. I love coffee. Then something happens. It’s 3 o’clock and I realize I’m on my fourth cup of the day. In that moment I begin to notice my elevated heart rate, a funny feeling in my stomach, and more energy than is good for me. The truth is I had too much of a good thing.

 

[Photo of a coffee mug with a quote about courage]

What is your “too much of a good thing”? I’m convinced we all have one, whether we want to admit it or not. Nowadays it seems assessment and data have become one of higher education’s good things that we have too much of. I want to be clear: assessment and data are necessary in higher education. Both are good, just like coffee. However, when we have too much of them and do not use them effectively, this good thing turns into something bad. I see this show up most often in three ways, which I describe below.

  • Quality over quantity. “Assess more and have more data” has been the message given to many higher education professionals. More isn’t inherently bad, but it also isn’t always necessary. When we expect professionals to assess more, are we equipping them with the skills to build effective assessment tools? Are we being thoughtful about targeting what we assess instead of assessing everything? Do we consider survey fatigue? We must consider these questions. Creating fewer, more effective assessment tools that provide rich data, instead of conceding to the pressure to assess everything, will serve professionals well. Switching the focus to quality over quantity is a shift higher education must consider.
  • Dust-filled data. When we leave something in a corner and don’t attend to it, dust will collect. The same happens with data in higher education. When we conduct multiple assessments, we end up with data that is covered in dust because we do not do anything with it. Because most of our data is stored electronically we don’t see the dust, but it’s there. It’s not enough to say we did an assessment. We must go a step further and use the data! We must analyze the information we’ve collected, share it with the folks who need to know, and adopt a plan for how the data will be used. When we do this, our assessment becomes purposeful. When we do this, our investment in that specific assessment is justified. When we do this, our colleagues and students are best served. What timeline can you set for yourself to avoid dust collecting on your data? What data currently needs dusting off?
  • Over our heads. Some higher education professionals have done a great job assessing in effective ways and utilizing the data collected. However, the dissemination of that data often goes over people’s heads. The pressure professionals feel has turned into the need to create 30-page analyses of data. What happened to one-page summaries? When will we use technology to disseminate our data? How can we make the analysis and presentation of the data interesting, so people want to read and use it? These are all questions we should be asking when considering the dissemination of data. I have found infographics to be a highly effective way to disseminate information in an accessible way. Making changes to better share our stories is beneficial and necessary.

Assessment is a good thing. Making data-driven decisions is a good thing. We know this to be true. To ensure it doesn’t continue as too much of a good thing, professionals must consider the implications of the way we currently do assessment in higher education. The survey fatigue students experience, the pressure to have data when making decisions of any size, and the expectation that we assess everything under the sun have clouded the goodness of assessment. How are you doing with quality over quantity? What data needs to be dusted off in your office? How can you make data accessible to all? Considering these questions will get you one step closer to keeping assessment good. Because remember, like my fourth cup of coffee in the afternoon, you want to steer clear of having too much of a good thing.

Mika Karikari is currently a doctoral student in Student Affairs in Higher Education at Miami University as well as an Associate in Career Services. Her professional background also includes academic support, residence life, and new student programs. You can follow Mika on Twitter @MikaKarikari or email her at johns263@miamioh.edu.
  

Guest Blogger: Incentivizing the Residential Curriculum

A new paradigm has emerged in residential education in recent years. This paradigm, generally referred to as the “residential curriculum” approach, flips the script on how we view the role of residence halls in the life of the university. While the curriculum approach is newer, the concept of learning happening outside the classroom is not. The curriculum approach extends the concept of outside-the-classroom learning and treats the residence hall setting as a laboratory for student learning.

The traditional approach to learning in the residence halls can be summarized as the programmatic approach, where student staff (e.g., Resident Assistants or RAs) lean on active programs that may be educational in nature but in many cases are purely social. Now hear me out: I am not against social programming. Social programming is necessary and needed to build relationships among residents and develop a community of depth.

The new curriculum approach takes some cues from our friends in academic affairs and starts with learning outcomes (i.e., what we want residents to learn). Once the learning outcomes are determined, you identify the strategies that could help you achieve those outcomes. After a strategy is used comes the last, and most perplexing, process: assessing whether the learning outcome has been met, and thereby whether student learning has taken place.

Now in my eighth year working in residence education, I’ve been grateful to work with learning-outcomes-based models at three different institutions. At all three I’ve observed the development of learning outcomes and strategies, but found that the last step, assessment, continually perplexes student affairs professionals in these contexts. We’ve mastered assessing student satisfaction, but have found it much harder to quantify, or qualify, student learning. Why is that?

There are several dilemmas or barriers I’ve encountered in attempting to gauge student learning. First, many times our student staff are the ones carrying out our curriculum. In their role as peer educators, can they effectively assess the learning of their peers when this is something student affairs professionals have been trained to do? Second, time. It is hard enough to get students to attend an educational program, and adding an additional “last step” of an assessment survey can be asking a lot of students who have already volunteered their own time to attend. What’s the incentive to complete the survey? For many students, there isn’t one. Third, many of our attempts at assessment fail and lean more toward anecdotes than valid evidence of learning. Wouldn’t it be great if, in a perfect world, we could have students stick around for a focus group and ask pointed questions that would help us elicit whether, in fact, learning took place?

Recently, I’ve been excited and grateful to join the Housing & Residence Life team at the University of Dayton in Dayton, Ohio. One of the things that drew me to apply for, and eventually accept, my current position is the work my office is doing to promote student learning in our on-campus communities. Dayton is a highly residential campus (78%), and where students live is a central, even defining, part of the student experience.

In an effort to better leverage this affinity for living on campus, we set out to incentivize student learning in August 2014, a year before I started working there. We incentivize student learning by making each strategy of our residential curriculum (i.e., community meetings, one-on-ones, roommate agreements, programs) worth 1 point. The more points students accumulate, the greater their chances of receiving favorable housing for the next academic year. We no longer have a housing lottery. Instead, we track the points students earn by participating in opportunities for learning and apply those points to our housing assignments process. This puts students in the driver’s seat of their housing assignment process.

The main dilemma this could pose is something I thought about immediately when interviewing for my current job: are students really going to programs and “engaging in learning” because they want to, or because they just want the points? What I began to realize is that the student’s motivation for attendance isn’t really what we care about. What we care about is the experience and, hopefully, the learning that takes place through their attendance and participation in our learning opportunities. BUT how do we gauge whether that learning is actually happening? Hence the dilemma. What we may have on our side is the dangling carrot of attaining points. Could we ask students to participate in their learning experience AND then complete a short “assessment of their learning” in order to attain their points? This may enable us to collect some valid data that could help us demonstrate that students are learning in their residential environment, regardless of why they are there.
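
To make the mechanics concrete, here is a minimal, purely hypothetical sketch of how a points process like the one described above could gate credit on completing a short learning assessment. The names, the one-point rule, and the completion check are placeholders for illustration, not the University of Dayton’s actual system.

```python
# Purely hypothetical sketch of a points-based residential curriculum tracker.
# Strategy names, the one-point rule, and the assessment gate are illustrative,
# not the University of Dayton's actual system.
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    name: str
    points: int = 0
    completed: list[str] = field(default_factory=list)

class CurriculumTracker:
    def __init__(self) -> None:
        self.records: dict[str, StudentRecord] = {}

    def record_participation(self, student_id: str, name: str,
                             strategy: str, assessment_completed: bool) -> int:
        """Award one point per strategy, but only if the short learning
        assessment tied to that strategy was submitted."""
        record = self.records.setdefault(student_id, StudentRecord(name))
        if assessment_completed:
            record.points += 1
            record.completed.append(strategy)
        return record.points

    def housing_priority(self) -> list[StudentRecord]:
        """Rank students for the housing assignment process by points earned."""
        return sorted(self.records.values(), key=lambda r: r.points, reverse=True)

if __name__ == "__main__":
    tracker = CurriculumTracker()
    tracker.record_participation("s001", "Jennifer", "community meeting", assessment_completed=True)
    tracker.record_participation("s002", "Alex", "roommate agreement", assessment_completed=False)
    for record in tracker.housing_priority():
        print(record.name, record.points)
```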

Matt Kwiatkowski is the Assistant Director of Residence Life at the University of Dayton in Dayton, Ohio. Please leave your comments below or feel free to contact Matt via email at mkwiatkowski1@udayton.edu. 

Will You Be a Guest Blogger?

When we started Oh No, our hope was to have one LARGE conversation about assessment. Thus far, it’s mainly been us talking to ourselves – which is fun but not achieving our goal.

We want to expand the conversation about assessment in higher education, and the best way to do that is to invite creative, innovative professionals to help take the conversation further. We have lots of smart professionals in our lives already who are doing amazing things in various areas of higher education (see some of them below!).


These friends of ours (and others whom we don’t even know yet [i.e., hopefully YOU!]) will be adding their perspectives in the coming weeks.

We’d love for you to add your voice and fill in the gaps that we are missing. If you’re interested in adding to the assessment conversation we’ve started, let us know by filling out the form below.

Sending you much assessment power, 

Abby and Mark
[Photo of Mark and Abby]