The Math Medic Assessment Platform (MMAP) is a tool that enables teachers to deliver high-quality, mathematically rich questions to students through informal assessments (homework) and formal assessments (quizzes and tests). The MMAP is a resource teachers can use to assess their students’ learning and understanding of ambitious content. While questions are *delivered* to you and your students digitally, we’ve stopped short of building a comprehensive online assessment and grading system. Why? **The problem with online assessment and grading systems is that they often run counter to the pedagogical goals we hope to achieve!**

So if you’re wondering why we don’t have certain features you may be used to with other online platforms, this one’s for you! Below we’ve provided our perspective on some of your most common questions.

## Why don’t we use auto-grade?

**We want students to do more than just calculate!** The biggest reason we have chosen not to use auto-grade in the MMAP is that it greatly limits the kinds of questions we can ask of students. Only multiple-choice questions or questions with single numerical answers can be graded by a computer. This prevents us from asking students to describe, explain, consider what if, critique, justify, interpret, show, synthesize, or create. Fostering conceptual understanding and higher-order thinking requires these more cognitively demanding prompts (check out Anderson et al.’s revised Bloom’s Taxonomy).

**The myth of instant feedback:** The most common argument for auto-grading is that students benefit from instant feedback. When students can quickly see whether they are right or not, they can self-correct and get back on track. Makes sense, right? However, the research suggests that this process is quite a bit more complex. In a 2007 article by John Hattie and Helen Timperley published by the American Educational Research Association, the authors conclude that “feedback that strictly focuses on whether student answers are correct…are not viewed as useful for increasing students’ mastery nor their self-efficacy for the task at hand.” If incorrect answers are simply marked and no explanation is given and no reflection is done, the “feedback” becomes merely criticism and learning does not happen. The *quality* of the feedback, the type of feedback, and the way it is delivered greatly influence its effectiveness. It is also clear that when students repeatedly submit thoughtless responses until they hit upon the correct answer, they are not actually reasoning about the solution or learning from their attempts. In fact, the availability of infinite attempts may actually reduce students’ effort rather than encouraging persistence and a growth mindset (Khanlarian et al., 2010).

**Teachers need to see their students’ work.** Teaching is demanding work, and anybody would be incentivized to find efficiencies in their work day. Technology can be an excellent tool for this, but an auto-grade feature falls short in achieving the main purpose of assessment: communicating to us where students are at in their learning. Even summary reports of class averages and commonly missed questions cannot identify strengths and weaknesses in students’ reasoning skills (since no reasoning is being assessed, just an answer), and they lack specificity in pinpointing *why* a student got an answer wrong. Did they make a small calculation error? Did they read the problem wrong? Did they completely miss that day’s lesson? This robust knowledge of where students are at in their learning is exactly what is needed to move students along the learning continuum. Joe Feldman writes in “Grading for Equity” that “teachers use and even depend on student response on homework to modify instruction and address students’ errors and misconceptions before students take the summative assessment.” We cannot bypass the teacher when it comes to assessment!

**What is math class all about?** The Math Medic Assessment Platform is designed to be used alongside Experience First, Formalize Later (EFFL) lessons enacted in class. Thus, the same teaching philosophy applies. We care about students’ *reasoning* more than their *answers*. We care about students’ *understanding* more than their ability to memorize something, execute a procedure, and get a question right. How and what we assess must align with what we value! In a 2017 NCTM post, Adam Sarli writes that “shifting the focus in math class away from answers and toward methods has huge implications for student learning. It prompts teachers to plan lessons around deep mathematical ideas and to ask questions that get students’ reasoning in focus. It encourages students to develop or try new strategies. It can even get students asking their own questions and justifying conjectures that hit at the heart of mathematics.” The kinds of questions, then, that we ask of students on our homework assignments, quizzes, and tests, and the kinds of responses we expect from students, are simply too complex to be entered into a single text box.

## Why can’t students submit answers digitally?

For many of the same reasons, we don’t have students submit their work electronically, even if it is graded by the teacher. Here’s why:

**Typing math is hard!** The use of equation editors presents an entirely new skill for students to master, and it can diminish their willingness to communicate their reasoning and use proper notation. While typing is often the faster and preferred method in other subjects, the unique nature of mathematical writing makes typing, rather than handwriting, a considerable obstacle.

**Research suggests handwriting is more effective for learning than typing.** While the vast majority of research is centered around how students take notes during class, not how they submit homework, the main conclusion stands that typing results in shallower processing, and in one study was shown to lead to worse performance on conceptual questions on assessments (Mueller & Oppenheimer, 2014).

**Writing out answers by hand values the process, not just the final answer.** Because students rarely communicate their whole thought process when submitting answers electronically, the emphasis tends to be on the student’s answer, not their reasoning. From conversations with our own students, we have gleaned that students perceive digital submissions as more high-stakes because they are committing to a locked-in answer rather than informally working through a problem. Because we value students’ rough-draft thinking, we actively look for ways to make math homework and assessments more informative than evaluative, and hopefully reduce students’ anxiety along the way.

**Additional reading:** The Problem with Giving Math Tests Online (Edweekly)

References:

Anderson, L. W., Bloom, B. S., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing. Longman.

Feldman, J. (2019). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms. Corwin, a Sage Publishing Company.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Khanlarian, C., Shough, E., & Singh, R. (2010). Student perceptions of web-based homework software: A longitudinal examination. Advances in Accounting Education, 197–220. https://doi.org/10.1108/s1085-4622(2010)0000011012

Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard. Psychological Science, 25(6), 1159–1168. https://doi.org/10.1177/0956797614524581

Sarli, A. (2017, September 25). Math is a subject with a right answer; the goal is to get it. NCTM MTMS Blog. Retrieved November 3, 2022, from https://www.nctm.org/Publications/MTMS-Blog/Blog/Math-Is-a-Subject-with-a-Right-Answer;-the-Goal-Is-to-Get-It/
