A Point in Time Assessment (PITA) is an increasingly popular alternative to tracking attainment and progress in a levels-like, linear fashion.
Unlike a linear model, in which pupils are expected to progress through a number of steps or points along a ‘flight path’, a Point in Time Assessment assesses learning against what has been taught to date: a learner’s achievements are compared against the understanding and competencies expected at that ‘point in time’.
Where a learner is deemed to be meeting expectations, their attainment is graded as ‘Expected’ (or a similar term of the school’s choice). Learners who have achieved more are graded as currently exceeding expectations, and those attaining less are graded as below expected to varying degrees (again, using terminology the school chooses). Point in Time Assessment works most effectively when schools have a clear sense of what they expect of their learners and of how this changes through the year.
Progress is measured by comparing Point in Time Assessments over time. If a learner consistently meets expectations and continues to work at the expected standard, they are judged to be progressing at the rate their school expects. Where a learner moves up a grade, this suggests that they have achieved more than expected between the two milestones; they have made better than expected progress. If they move down a grade, this suggests they have achieved less than was expected and so have made less than expected progress.
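As a minimal sketch of this logic, assuming a simplified three-grade scale (the grade names here are placeholders, not Pupil Asset’s defaults):

```python
# A minimal sketch of the progress logic described above, assuming a
# simplified three-grade scale; real PITA models usually use more grades.
GRADES = {"Below Expected": -1, "Expected": 0, "Above Expected": 1}

def progress(earlier: str, later: str) -> str:
    """Compare two Point in Time Assessments of the same pupil."""
    delta = GRADES[later] - GRADES[earlier]
    if delta > 0:
        return "better than expected progress"
    if delta < 0:
        return "less than expected progress"
    return "expected progress"  # same grade at both milestones

print(progress("Expected", "Expected"))        # expected progress
print(progress("Below Expected", "Expected"))  # better than expected progress
```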
Point in Time Assessments can also be used to predict end-of-year and end-of-key-stage attainment. As long as expectations have been mapped out appropriately, a learner currently meeting age-related expectations in Year 3 can be thought of as ‘on track’ to meet age-related expectations at the end of Key Stage 2. Of course, any such projection assumes learners will continue to make progress against the school’s expectations.
Making a Summative Judgement
Pupil Asset offers two standard Point in Time Assessment options: the first uses 7 grades and the second uses 9. Schools can personalise the wording of the grades around their own assessment policies.
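Because the exact labels are school-specific, the following is a purely hypothetical illustration of how a 7-grade and a 9-grade structure might be worded:

```python
# Hypothetical labels only; schools personalise the wording in Pupil Asset,
# so treat both lists as placeholders, not the product's defaults.
SEVEN_GRADES = [
    "Well Below Expected",
    "Below Expected",
    "Just Below Expected",
    "Expected",
    "Just Above Expected",
    "Above Expected",
    "Well Above Expected",
]
# A 9-grade variant subdivides further, e.g. adding an extra band at each
# extreme (again, purely illustrative):
NINE_GRADES = (["Significantly Below Expected"] + SEVEN_GRADES
               + ["Significantly Above Expected"])
```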
Schools sometimes wish to simply use the terminology Working Towards, Working At and Greater Depth in their PITA model (3 grades). This is permissible as long as a school is happy to assess all children who are not at the Working At standard as Working Towards. It is for this reason that a wider range of descriptors is usually chosen.
PITA models give plenty of scope for teachers to make professional judgements about the achievement of their pupils. However, in order to make robust judgements, teachers should have a good understanding of the programme of study and a clear sense of what they expect of their learners throughout the year. Often this approach is guided by how well a learner has attained the skills taught so far, but it may also be informed by test performance and the teacher’s wider understanding of their pupils: Is their current attainment sustainable? Do they retain skills well? Are they motivated? Have they had additional support? Do they readily transfer what they’ve learnt to tests?
Some schools give teachers additional guidance on making summative judgements, which can help to create a consistent approach to assessment (thank you to Wicklewood Primary, Norwich, for sharing this example). This can be done qualitatively, by describing what a pupil needs to demonstrate to be working at each grade, or more statistically, using the idea of a Weighted Percentage. Weighted Percentages are calculated from the formative assessments teachers input into Pupil Asset and indicate how securely a learner is acquiring the relevant objectives.
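Pupil Asset’s actual Weighted Percentage formula is not reproduced here; the sketch below simply illustrates the general idea, with the tick labels and weights being assumptions:

```python
# Illustrative only: the tick labels and weights are assumptions, not
# Pupil Asset's actual Weighted Percentage formula.
TICK_WEIGHTS = {
    "Not Taught": None,      # excluded from the calculation
    "Working Towards": 0.5,  # partial credit for emerging skills
    "Working At": 1.0,
    "Greater Depth": 1.2,    # extra credit for deeper acquisition
}

def weighted_percentage(ticks: list[str]) -> float:
    """Average the weights of all taught objectives, as a percentage."""
    weights = [TICK_WEIGHTS[t] for t in ticks if TICK_WEIGHTS[t] is not None]
    return 100 * sum(weights) / len(weights) if weights else 0.0

print(round(weighted_percentage(
    ["Working At", "Greater Depth", "Working Towards", "Not Taught"]), 1))
# 90.0 under these assumed weights
```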
Using Pupil Asset Clumps to support ongoing PITA assessment
Schools can break each programme of study down into teachable units, often aligned with the school’s terms.
In some cases, this has already been done as the school follows a commercially available scheme of work (e.g. White Rose Maths, Hamilton schemes).
Once a unit has been formatively assessed, schools use benchmarks that equate different levels of attainment to different grades. These can be descriptive, such as ‘a learner should have achieved most of the skills taught so far to the expected standard to be graded as Expected’. Another common practice is to say that ‘a learner must achieve a certain percentage of the skills taught to a required standard to qualify for the Expected grade’ (e.g. 90% at Working At or higher). As subsequent units are taught, new skills are incorporated, and learners need to maintain the percentage of skills achieved to the required standard in order to retain their previous summative judgement.
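As a sketch of this percentage-benchmark approach (the 90% threshold mirrors the example above; the tick labels and pupil data are illustrative):

```python
# A sketch of the percentage-benchmark approach described above. The 90%
# threshold mirrors the example in the text; everything else is assumed.
EXPECTED_BENCHMARK = 0.90  # share of skills at Working At or higher

def meets_expected(skill_grades: list[str]) -> bool:
    """skill_grades covers every skill taught so far (all units to date)."""
    at_or_above = sum(g in ("Working At", "Greater Depth") for g in skill_grades)
    return at_or_above / len(skill_grades) >= EXPECTED_BENCHMARK

unit_1 = ["Working At"] * 9 + ["Working Towards"]   # 9/10 = 90% -> Expected
print(meets_expected(unit_1))                       # True

# After unit 2 the denominator grows; the pupil must keep pace with the
# new content to retain the Expected judgement.
units_1_and_2 = unit_1 + ["Working At"] * 7 + ["Working Towards"] * 3
print(meets_expected(units_1_and_2))                # False (16/20 = 80%)
```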
Where schools complete their formative assessment within Pupil Asset, this approach can be set up on the DNA Ticks page. First, each age-related expectation needs to be divided into teachable units; within Pupil Asset these units are called Clumps. While the first Clump will contain only the skills taught in the first unit, each subsequent Clump should contain the new content plus all previous coverage. In this way, a benchmark can be applied to every skill that has been taught up to that point.
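The cumulative structure of Clumps might be sketched like this (the unit names and skills are hypothetical):

```python
# A sketch of how cumulative Clumps are assembled: each Clump contains the
# new unit's skills plus everything previously covered. Unit names and
# skills are hypothetical.
units = {
    "Autumn 1": ["place value", "addition"],
    "Autumn 2": ["subtraction", "shape"],
    "Spring 1": ["multiplication"],
}

clumps: dict[str, list[str]] = {}
covered: list[str] = []
for unit, skills in units.items():
    covered = covered + skills    # previous coverage + new content
    clumps[unit] = list(covered)  # Clump = all skills taught to date

print(clumps["Autumn 2"])
# ['place value', 'addition', 'subtraction', 'shape']
```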
Using the Display Options on the DNA Ticks page, teachers can isolate a specific Clump and apply their benchmarks to each learner’s attainment:
a – DNA Strip shows each learner’s achievement in just those skills taught to date (via a Clump);
b – The number (and %) of skills that have been formatively assessed as meeting (Working At) or exceeding (Greater Depth) age-related expectations. Where schools use percentage benchmarks, this can indicate the appropriate grade; here, Megan has attained 90.5% of the skills taught so far at the expected standard or higher;
c – Teachers input summative judgements on the right-hand side of the DNA Ticks page by clicking Edit Results and selecting the appropriate grade. In this school, Megan’s cumulative percentage of 90.5% reaches the benchmark for Above Expected, as sketched below.
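A sketch of how such benchmarks might map a cumulative percentage to a summative grade; only Megan’s 90.5% and the resulting grade come from this example, while the threshold values themselves are invented:

```python
# Hypothetical benchmarks: only the 90.5% figure and the resulting
# "Above Expected" grade come from the worked example; the thresholds
# themselves are invented for illustration.
BENCHMARKS = [  # (minimum cumulative %, summative grade)
    (90.0, "Above Expected"),
    (75.0, "Expected"),
    (50.0, "Below Expected"),
    (0.0,  "Well Below Expected"),
]

def summative_grade(cumulative_pct: float) -> str:
    for minimum, grade in BENCHMARKS:  # checked highest first
        if cumulative_pct >= minimum:
            return grade
    return BENCHMARKS[-1][1]

print(summative_grade(90.5))  # Above Expected, as entered for Megan
```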
Measuring Attainment
To review the attainment of a class, set or group of pupils, navigate to the Results > Results page and select the learners and subject you wish to analyse from the Filter.
Pupil Asset uses a range of colours to compare learners’ attainment against school expectations. Light green indicates that a learner’s attainment is as expected for that time of year; colours above green show degrees of attainment above expectations, and colours below green show degrees of attainment below them.
Measuring Progress
Within the standard Point in Time Assessment options Pupil Asset offers, progress is calculated using the Attainment Colour Bands Difference measure and a Value Added principle.
a – Grades in the black box show that these learners have made expected progress over the range in use. This is indicated in the Attainment Colour Bands Difference column on the right-hand side of the screen by the green bullseye icon.
b – This pupil has made better than expected progress, having progressed from Below Expected to Expected over the time frame. This is shown by the blue upwards arrow and the +2 bands notation, indicating that the learner has progressed through two grades.
c – This pupil has made less than expected progress. They were Above Expected at the end of Year 5 but were only attaining at Expected by the Spring of Year 6. This is shown by the orange downwards arrow and the -2 colour bands notation, indicating that the learner has regressed through two grades.
d – Value Added: a VA of 0 represents expected progress. The Legacy-style VA expresses Value Added scores such that 100 is expected progress, in line with old-style RAISEonline reporting arrangements (the option to add or remove this column is located under Admin > School Options > Tracking Options > Progress).
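A sketch of the two Value Added conventions above; the exact mapping from colour-band difference to VA is an assumption, and the point is only that 0 (or 100 on the legacy scale) represents expected progress:

```python
# A sketch of the two Value Added conventions described above: 0-centred
# VA, and the legacy RAISEonline-style scale centred on 100. Treating one
# colour band as one VA point is an assumption for illustration.
def value_added(bands_moved: int, legacy_style: bool = False) -> int:
    """bands_moved: +2 for the pupil in (b) above, -2 for the pupil in (c)."""
    va = bands_moved                       # 0 = expected progress
    return 100 + va if legacy_style else va

print(value_added(0))                      # 0   -> expected progress
print(value_added(+2, legacy_style=True))  # 102 -> better than expected
print(value_added(-2))                     # -2  -> less than expected
```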