Progressive Improvement Assessment (PIA)
Ted M. Coopman
I have used PIA in 100w (3 times) and in a writing course at another institution since fall 2010 (7 times) and find the technique extremely effective in motivating students to improve. Administering PIA has up-front costs early in the term but reduces grading time (and frustration) as submissions improve. Fewer technical errors = faster grading.
Overview
Progressive Improvement Assessment (PIA) is a method for reducing repeated technical errors in assignments. It creates escalating costs for a student's unwillingness to improve based on instructor feedback. For example, a student makes an error in APA inline citation format, such as failing to list a page number when quoting a text. The first error results in a nominal point loss. The error is flagged, and a correct example is provided or the student is directed to where the rule or an example can be found.
- The next time that error occurs it will cost 3 points.
- The third time it will cost 6 points, then 9 points and so forth up to the total value of the assignment.
If a student improves (such as correcting a previous error) over two assignments, the points lost (on the PIA penalty) for that particular issue will be partially or fully "rebated," that is, added back into the student's overall point score (rebated points go in the Extra Credit column).
PIA requires maintaining a record of students' technical and other errors and checking current assignments against it when grading. There are several methods for accomplishing this sort of bookkeeping. I use a table into which I enter the errors and then check after grading for repeated mistakes.
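The escalation and rebate rules above can be sketched as a small bookkeeping routine. This is an illustrative sketch, not the author's actual spreadsheet: the class and method names are hypothetical, and the rebate rule implemented here (refunding the points an error cost on the previous assignment once it is corrected) is one possible reading of "partially or fully rebated."

```python
class PiaLedger:
    """Hypothetical per-student ledger for PIA penalties and rebates.

    Tracks how many times each error has been flagged and which errors
    appeared on the previous assignment, so corrected errors can be
    rebated exactly once.
    """

    def __init__(self, step=3):
        self.step = step          # points added per repeat (3, 6, 9, ...)
        self.counts = {}          # error label -> times flagged so far
        self.prev_errors = set()  # errors flagged on the previous assignment

    def cost(self, occurrence):
        # First occurrence: nominal 1-point loss; repeats escalate by `step`.
        return 1 if occurrence == 1 else self.step * (occurrence - 1)

    def grade(self, errors):
        """Record this assignment's errors; return (penalty, rebate)."""
        errors = set(errors)
        penalty = 0
        for err in errors:
            self.counts[err] = self.counts.get(err, 0) + 1
            penalty += self.cost(self.counts[err])
        # An error flagged last time but corrected now earns back the
        # points it cost on the previous assignment.
        rebate = sum(self.cost(self.counts[err])
                     for err in self.prev_errors - errors)
        self.prev_errors = errors
        return penalty, rebate


ledger = PiaLedger()
print(ledger.grade(["APA inline citation"]))  # first offense: (1, 0)
print(ledger.grade(["APA inline citation"]))  # repeat: (3, 0)
print(ledger.grade([]))                       # corrected: (0, 3) rebated
```

With the stricter scheme from the Spring 2011 syllabus, `PiaLedger(step=10)` yields the 10/20/30-point escalation described there.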
Bottom line – it is extremely effective in improving students' writing.
Pilot Study Spring 2011
Introduction
I teach an online writing course, Comm 463 Global Media, in the Communication Department at the University of Louisville. Writing courses require students to write 2400 words. I first taught this course in Fall 2009 and have just completed teaching it for the fourth time. I have found a tendency in students (in this course and others) to improve on submitted work only if it helps them achieve their desired goals for the course (i.e., the grade outcome he or she wishes). If students are happy with a "C" and get a "C" on an assignment, they will often produce the same level of work on the assumption that it will earn a "C" in all circumstances, with the end result being a "C" in the course. Students tend to be very instrumental in the way they allocate their time and resources. This appears to be true of average as well as many excellent students.
In an attempt to improve writing competency, I moved to multiple smaller writing assignments that gradually increase in complexity over the course of the term. My rationale was that flagging errors and issues would lead to their elimination or reduction as the semester progressed. While this was the case for some students, many continued to repeat the same errors over and over, even as their assignment scores grew steadily worse. Near the end of the Fall 2010 term I let my students know that any repeated errors on the last two assignments would result in the assignment being returned to be corrected within 48 hours with a 10% point penalty. Compliance jumped immediately, and only two students had to resubmit assignments.
This result provided an interesting insight into Arum and Roksa's (2011) Academically Adrift study, in which the authors argued that undergraduates often are not really learning. While there are many structural and cultural constraints that contribute to this problem, it seemed apparent that a certain mercenary instrumentalism on the part of many students played a role. For me, the issue was how to motivate students to improve on their basic writing competencies, especially technical issues such as spelling, grammar, and following APA style guidelines. While technical issues are only one aspect of good writing, these basic errors often absorb instructor time and interfere with working on more complex issues. Since technical elements are fairly easy to identify, this was the area I chose to focus on. Moreover, I had discovered that in many cases there was a direct relationship between poor technical skills and overall writing competency.
For the Spring 2011 semester I instituted a Progressive Improvement Assessment policy in my Comm 463 course. This is from that syllabus:
For each wiki submission, I will give detailed feedback flagging technical errors and making suggestions on improving your writing. I have found that some students consistently repeat the same basic technical errors (even when they are repeatedly identified) not because of a lack of skill or ability, but an unwillingness to take steps to improve technical proficiency. Education is about learning and learning is about improvement. I expect your writing to improve over the semester (actually, I insist on it). Grading will get progressively harder as we get used to each other and you work on these assignments. Repeated errors will cost progressively more points for each assignment. This is how it will work:
A student makes an error (for example) in APA inline citation format such as failing to list a page number when quoting a text. The first error results in a nominal point loss. I note the error and provide a correct example or directions to where the student can find the rule or an example.
The next time that error occurs it will cost 10 points.
The third time it will cost 20 points, then 30 points and so forth up to the total value of the assignment.
I maintain a record of student technical and other errors and check current assignments against it when I grade. This may sound extreme to some, but it is imperative that students take active steps to improve basic skills.
Implementation
I implemented this policy by creating a table that included cells for each student. In those cells I copied the grading rubrics from the bottom of each assignment and added annotations on the issues students needed to work on. I also provided extensive inline comments that specifically identified issues and often offered correct examples. These examples are from the same student. Note that this earlier assignment was worth less (85 pts.), while later assignments were worth 100. A rubric for an assignment looked like this:
Wiki #2
Following Directions: ok
Technical Elements: APA again.
Improvement: repeated same APA error -10
Coherency: Try reading your work out loud to gauge flow/smoothness/coherency.
General Comments: [371] [name], complete, but this reads a little rushed. The last paragraph seems tagged on and could have benefited by some supporting evidence. It is just comparatively weak. Take a bit more time to flesh these out.
Score: 72-10 = 62
I clearly identified the error and the “cost” to the student. Some students still failed to correct errors and the number of points deducted increased at a rate I had not expected.
Wiki #4
Following Directions: ok
Technical Elements: APA still (-30)
Improvement: continued APA – why?
Coherency: not very; this reads like a collection of historical facts randomly assembled to meet the assignment's technical requirements. Needs organizational work; try outlining. Try reading your work out loud to gauge flow/smoothness/coherency.
General Comments: [671] [name], first off these repeated APA issues are inexcusable and increasingly damaging to your grade. Review the workshop examples and/or get assistance. This reads like a fact salad. There is very little holding this together. The overarching topic choice is fine, the referenced material could work, but there is no narrative flow. I am not finding a clear thesis with supporting evidence reaching toward a conclusion.
Score: 75 (-30) = 45
The damage to some students’ grades reached the point where I became concerned I had overreached – perhaps “too much stick.” To compensate, I offered to “rebate” lost points for improved performance as an incentive. This seemed to have the desired result for some students. For example:
Wiki #5
Following Directions: does not really address question
Technical Elements: fragment; inline APA [what happened? This was correct in earlier wikis]; spelling; Quotes need locations (p. X or para X)
Improvement: APA MUCH improved in references [+10] keep it up to recover more points.
Coherency: You seem to have a consistent problem with integrating sources into your writing. It reads like you write your paper, then go out to look for some sources that might fit. Sources come first – they are there to inform your analysis.
General Comments: [531] [name], this is always a tough wiki. The simple desire for marriage is not an ideology. Central to your ideology could be the traditional nuclear family unit or the institution of traditional marriage and what it stands for in US culture. Review Wise’s discussion beyond this basic quote.
Score: 70+10=80
The rebating of points appeared to encourage some students to continue to improve. This was the student’s final submission:
Wiki #7
Following Directions: ok
Technical Elements: minor punctuation
Improvement: yes!
Coherency: ok
General Comments: [417] [student name], I am very impressed with the improvement you have made this term. Your writing has come a long way in both structure and technical detail. I really appreciate your hard work. You should feel proud of this accomplishment. Good writing skills will serve you well in life. As promised, I am refunding those final 15 points (they will go in the EC column in My Grades).
Score: 100+15
While some students failed to make substantial improvements over the term, many students' writing and technical skills improved considerably, and they were able to submit almost error-free work.
Comparative Results
It is difficult to compare students and courses across terms. Moreover, in addition to implementing PIA I made other course improvements, such as multiple instructor videos to introduce and contextualize different aspects of the course and a mandatory APA and Writing Workshop and associated quiz. However, the number of assignments, readings, and questions for each assignment remained the same.
In a comparative analysis between the two terms, I looked at the repeated error rate defined as any repeated errors flagged in students’ assignments (does not take into account multiple repeated errors on individual assignments). As of assignment #3:
- Fall 2010 (N=14): repeated error rate 85%
- Spring 2011 (N=18): repeated error rate 25%

As of assignment #5:

- Fall 2010 (N=14): repeated error rate 85%
- Spring 2011 (N=18): repeated error rate 11%

As of (final) assignment #7:

- Fall 2010 (N=14): repeated error rate 50%
- Spring 2011 (N=18): repeated error rate 22%
Overall error rates are down considerably between the two terms. I did not apply punitive point penalties in Spring 2011 for Wiki #7, so this error rate is based on items that had been flagged for that student previously. A significant limitation is the way I noted errors between the two terms. In Fall 2010 I had to examine each student's record to assess whether repeated errors occurred. In some cases I had noted a repeated error, and in others I had simply noted that a specific error was made. Because of this, these numbers are a bit "squishy" and should be taken as a general reference only.
Last, overall final writing submissions were of much higher quality and earned higher grades in Spring than in Fall.
For (final) assignment #7:
- Fall 2010 (N=14): average score 81
- Spring 2011 (N=18): average score 89
In the final analysis, 9 of 18 final submissions (50%) in the Spring 2011 course earned 100/100 points (1-2 minor technical errors but strong writing), while in Fall 2010 it was 3 of 14, or about 20%.
Specific Analysis of Spring 2011
For the pilot PIA course, 11 of 18 students took at least one PIA-enhanced penalty, most between 10 and 20 points. A total of 240 penalty points were assessed, of which 120 were returned as a reward for improvement. It is important to note that two students accounted for 120 (50%) of the 240 total penalty points. Also of interest, one of these students (whose work was used as an example above) improved greatly and earned perfect scores on the last two assignments.
Student Reactions
At the end of the course I asked students to provide (for extra credit) three things they liked and three things they disliked (or suggestions for improvement) about the class. Only three students responded negatively. One stated that he didn't think any points should be taken off for APA errors, and two others remarked that it was too "harsh," although one admitted he could see the point of doing it.
Conclusions and Recommendations
Obviously, these data are too provisional and limited to draw specific conclusions about implementing a PIA scheme. However, I think that the results of the pilot study warrant further analysis and experimentation with this or similar systems that shift responsibility to students and hold them accountable for objective improvement over the course of a class. Through a system of highly visible incremental penalties and rewards, students appear to respond and take proactive steps to improve their skills.
Based on this experience, I am modifying my PIA design in several ways:
1. Reducing scaled point penalties to 5 points instead of 10. I believe that 10 points on assignments ranging from 85 to 100 points is excessive and that 5 points (or 5%, if you prefer) would be as effective.
2. Rebating points appeared to be very effective in reducing student resentment at being penalized. I added this mid-term and would make the policy more explicit in the instructions and any explanatory video.
3. Based on feedback from a colleague, I think students would benefit from a more formalized revision/correction scheme. This would place the student's evaluation on an electronic sheet that students would have to submit with each assignment. Much like a revision for a journal submission, students would have to directly address each error and state what steps they have taken to address it. I am hoping this will encourage students to actively engage in their own learning process as well as reduce avoidance behavior. The revision sheet would carry a point value as well. An added benefit is further systematizing grading and record keeping to reduce instructor workload.
Finally, let me clearly state that this system is compatible with heavy teaching workloads. I teach 5 classes per term, four upper-division (writing course, 2 research methods courses, 1 theory/topic course) and one lower division course.
Change is coming to higher education. Enhanced assessment and accountability will either be created by us from below, based on the reality of our classroom experiences, or imposed from above. The latter is what has happened to K-12. PIA and similar systems can provide a basis not only for better outcomes but also for assessing student improvement.
May 16, 2011.