Assignments and quizzes are graded in several ways, and these are applied in the different combinations listed below:
1. Multiple attempts are allowed, with both soft and hard deadlines and a penalty once the hard deadline has passed.
2. Only one attempt is allowed.
3. Feedback is given on a question-by-question basis, explaining what was right and what was wrong.
4. No feedback at all, just a total score, without even an accounting of which questions were right or wrong.
5. Time limits are imposed (or not).
So (1) could be combined with (4), (2) with (3), and so forth. These options come only from the courses I've been in, so peer grading is excluded, and since there were no writing assignments I can't comment on those. At this point, I'll just say that #4 really sucks! #3 is great, although when it is combined with #1 the student already has the answers, since the questions do not change. So in a way that combination sucks too.
The question that really needs to be asked is whether grades reveal anything at all, and reasonable people will have different opinions on this. What does an A reveal? What if the whole class got As? Does it make a difference? As the discussion in the embedded link points out, grades reveal as much as they can only when there is a point of reference. My sense is that, by itself, a grade is meaningless.
Again, Khan Academy is moving ahead of the curve on this front. The two relevant concepts are achievement of proficiency (full, average, or below average would be a starting delineation, though Khan isn't doing this just yet) and adaptive testing; in other words, achievement of proficiency via adaptive testing. Students learn through quizzes, tests, and exams. If they get a question wrong, an algorithm falls back to an easier question and then steps the student back up to the harder one, which may be presented again verbatim, in a different form, or both.
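To make the step-down/step-up idea concrete, here is a minimal sketch in Python. Everything here is hypothetical (the bank, the function names, the one-level step size); Khan Academy's actual algorithm is not public, and this only illustrates the mechanism described above.

```python
import random

# Hypothetical question bank for one concept, keyed by difficulty level
# (0 = easiest). Each level has several variants, so a retried question
# can reappear in a different form.
BANK = {
    0: [("2 + 3 = ?", "5"), ("1 + 4 = ?", "5")],
    1: [("12 + 19 = ?", "31"), ("17 + 14 = ?", "31")],
    2: [("3 * (4 + 5) = ?", "27"), ("(2 + 7) * 3 = ?", "27")],
}

def next_level(level, correct, max_level):
    """Step down one difficulty level after a miss,
    step back up toward the hardest level after a success."""
    if correct:
        return min(level + 1, max_level)
    return max(level - 1, 0)

def pick_question(level):
    """Choose one variant at the given difficulty,
    so the retry may come in a different form."""
    return random.choice(BANK[level])
```

A quiz loop would then call `next_level` after grading each answer and `pick_question` to fetch the next item, cycling the student back up to the question they originally missed.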
If there is anything to be taken away from this approach, it is the following: grades do not penalize learning. As it stands now, grades are more of a penalty than a reward or an incentive. If there are two relevant measures that can be used to gauge a student, they are these: persistence and the length of time taken to reach some proficiency level. These two measures reveal more than the grade itself. One student may have worked hard for an A while another breezes through the class. A potential employer looking at an A cannot tell whether the first person is a hard worker or someone doing well simply because they are smart but have no work ethic.
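Both measures fall straight out of an attempt log. A minimal sketch, assuming a hypothetical log of timestamped scores and an arbitrary proficiency threshold (neither of which comes from any real platform):

```python
from datetime import datetime

def proficiency_metrics(attempts, threshold=0.9):
    """attempts: chronological list of (timestamp, score) pairs.
    Returns (persistence, days_to_proficiency), where persistence is
    the number of attempts up to and including the first score at or
    above the threshold, or None if proficiency was never reached."""
    start = attempts[0][0]
    for n, (ts, score) in enumerate(attempts, start=1):
        if score >= threshold:
            return n, (ts - start).days
    return None

# Hypothetical log: two attempts, three days apart.
log = [
    (datetime(2013, 1, 1), 0.5),
    (datetime(2013, 1, 4), 0.92),
]
print(proficiency_metrics(log))  # persistence 2, 3 days
```

The point is that the pair (2 attempts, 3 days) says something an A by itself cannot: how hard the student had to work to get there.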
Coursera could implement this type of approach to learning. It does encroach on the testing field, where companies like ETS and Pearson may start feeling the heat, and it isn't clear whether this is something Coursera will want to get into (depending on the VCs). Unfortunately, Coursera is far from implementing this approach, although I believe the technology and the architecture are already in place.
The same problems that plague teaching at the college level in the bricks-and-mortar setting carry over into the online setting. Adaptive testing needs a really large test bank, and professors are loath to build one since research is what they are more interested in. Perhaps the task can be assigned to real teachers as opposed to professors, by which I mean teaching assistants and those not under the publish-or-perish system. It could even be open or crowdsourced. Perhaps more importantly, it forces teachers to think about proficiency in specific concepts, not just the course as a whole. For adaptive testing to work well, each concept has to be well defined so that the software can step back and present another relevant question. This is easier at Khan Academy, since they focus on basic skills, but much, much harder at the college level.
It would make learning a little bit more fun and take away the feeling that getting a question wrong on a quiz is a penalty.