This page describes how to analyse and interpret student literacy data using different assessments.
You can find more information on Literacy tests and what they assess, which describes the different types of literacy assessment tools available and the data they will help you to collect.
Locating students on a developmental pathway
To effectively plan a program of learning for a student with learning difficulties, it is essential to first understand their strengths and areas of need. To do this, you will need to be able to correctly interpret the literacy (reading, writing, spelling) assessment data you have collected.
Student data might be used to understand a student's ability to decode words of varying complexity. For example, a student may read a simple consonant-vowel-consonant (CVC) word such as 'bat' but struggle with a more complex consonant-consonant-vowel-consonant-consonant (CCVCC) word such as 'blend', a word with a digraph such as 'wish', or a word with a trigraph such as 'right'.
By evaluating the word types that a student can and cannot read independently, you can gather key diagnostic information about how best to support them.
To get an accurate measure, use a student's results on word reading tests, noting the words they read correctly and the words on which they made errors. Patterns in word structure (the sequence of consonants and vowels), word length (number of syllables, number of letters and so on) and particular graphemes (those that have been taught and those that have not) will give you information about a student's reading and spelling needs.
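If you record word reading results electronically, a short script can help surface these patterns. The sketch below is illustrative only: the word list and results are invented, and it approximates consonant-vowel structure from letters, so a digraph such as 'sh' is counted as two consonants.

```python
# Illustrative sketch: tally word reading errors by approximate C/V structure.
# The word list and results below are invented for illustration.
from collections import Counter

VOWEL_LETTERS = set("aeiou")

def cv_pattern(word: str) -> str:
    """Approximate a word's structure from its letters,
    e.g. 'bat' -> 'CVC', 'blend' -> 'CCVCC'.
    Letter-based patterns are only approximations: the digraph 'sh'
    in 'wish' is two letters but one sound."""
    return "".join("V" if ch in VOWEL_LETTERS else "C" for ch in word.lower())

# Hypothetical test record: word -> whether the student read it correctly.
results = {"bat": True, "sun": True, "wish": False,
           "blend": False, "crisp": False, "right": False}

errors_by_pattern = Counter(cv_pattern(word) for word, read_ok in results.items()
                            if not read_ok)
print(errors_by_pattern)
# Counter({'CCVCC': 2, 'CVCC': 1, 'CVCCC': 1})
```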
Reading comprehension can also be assessed, but it is more complex to measure accurately than word reading. It is important to consider carefully the text type about which students are being asked questions. For example, considerations include:
- familiarity of the content
- complexity of the language structures used
- level of jargon in the vocabulary
- level of abstraction needed to make sense of the text.
It is worth inspecting the question types and whether they require the student to undertake tasks such as:
- recall facts or make inferences
- comprehend literally or with abstraction
- paraphrase simple sentences
- synthesise content
- use synonyms
- recognise the main idea in a paragraph.
If the test comprises multiple texts, note the types of texts that the student comprehended correctly or found challenging.
You can also ask the student to read the text out loud. This will give you information on whether the student is able to read the words of the text accurately, fluently and with appropriate intonation.
Some reading comprehension tests, such as the York Assessment of Reading Comprehension (YARC) and the Neale Analysis of Reading Ability, have these requirements built in. Students who cannot 'lift the words off the page' will not be successful at processing and comprehending written text.
Once you have described a student's abilities in these areas, you can then locate them on a developmental pathway, such as the National Literacy Learning Progressions or the Victorian F–10 Curriculum: English. These pathways indicate the year level of a student's skill in each area and the subsequent knowledge and skills that students should be taught.
Understanding assessment and test results
A student’s performance on tasks and tests can be described in many ways. It's essential to understand these terms and what they mean for how students learn.
For example, some assessments compare your students’ abilities with children of the same age or year level. Others describe knowledge and skills without referring to other students.
Norm-referenced tests
Norm-referenced tests compare a student's abilities with others. These types of tests are carefully designed using psychometric principles so that your student's performance can be compared to the 'expected' range for students in that age or year level (often referred to as a reference group).
Many norm-referenced tests contain a series of sub-tests. Each sub-test has a range of average scores for students who do not have learning difficulties for that task or test. If a student's score is within this range their outcomes will be described as average. If it is well below this range, this may indicate a learning difficulty or disability, such as dyslexia.
Students' outcomes are measured as raw scores and then converted to standard scores, percentile ranks, stanine scores or age/year level norms.
Criterion-referenced tests
Criterion-referenced tests assess specific skills or knowledge without comparing a particular student to others. They do not tell you a student’s total score performance in relation to an expected range, but whether a student has achieved certain objectives or criteria.
Examples include tests that assess how well students can apply spelling procedures they have recently been taught. In-class quizzes and tests are also examples.
Percentiles
The percentile rank system tells you how many students in a reference group of 100 scored equal to or below a particular student on a task or test. For example, if a student's score is at the sixteenth percentile, this means that in a group of 100 students, 16 would have had the same score or a lower score, and the other 84 would have had a higher score.
To interpret percentiles, we use the Normal Curve as the base. A percentile rank of 50 is the average. A percentile rank of between 16 and 84 reflects the average range. A percentile rank of 20 could be described as low average, and a percentile rank of 80 could be described as high average. Percentile ranks below 16 reflect a performance that is below the expected range for a student of that age and/or year level.
The lower the percentile, the more difficulty is being experienced by the student.
Percentiles and standard scores provide the same information. If your student received a standard score of 85, this would equate to a percentile rank of 16. If your student received a standard score of 115, this would equate to a percentile rank of 84.
Just like standard scores, a percentile rank does not tell you which questions a student answered correctly or incorrectly. It is still important to interrogate the test results.
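If you want to check this correspondence yourself, it follows from the normal curve with a mean standard score of 100 and a standard deviation of 15. A minimal sketch, assuming those conventional values:

```python
# A worked check of the standard score / percentile correspondence,
# assuming the conventional normal curve with mean 100 and SD 15.
from statistics import NormalDist

reference = NormalDist(mu=100, sigma=15)

def percentile_rank(standard_score: float) -> float:
    """Percentage of the reference group scoring at or below this score."""
    return reference.cdf(standard_score) * 100

print(round(percentile_rank(85)))   # 16
print(round(percentile_rank(100)))  # 50
print(round(percentile_rank(115)))  # 84
```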
Stanines
The stanine system divides the range of possible scores on a test into nine groups called stanines. A stanine score tells you which group a student's score is in, with the lowest being Stanine 1 and the highest Stanine 9.
- Below average scores fall into Stanines 1, 2 and 3.
- Average scores fall into Stanines 4, 5 and 6.
- Above average scores fall into Stanines 7, 8 and 9.
Stanines provide a helpful way of summarising scores. A score in Stanine 2, for example, is as far below the average as a score in Stanine 8 is above it.
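Stanines correspond to fixed percentile bands of the normal curve (roughly 4, 7, 12, 17, 20, 17, 12, 7 and 4 per cent of scores in Stanines 1 through 9). A minimal sketch of converting a percentile rank to a stanine, assuming those conventional bands:

```python
# A minimal sketch of converting a percentile rank to a stanine,
# assuming the conventional stanine bands of the normal curve.
from bisect import bisect_right

# Upper percentile boundaries of Stanines 1 to 8
# (4%, 7%, 12%, 17%, 20%, 17%, 12%, 7% and 4% of scores per stanine).
BOUNDARIES = [4, 11, 23, 40, 60, 77, 89, 96]

def stanine(percentile: float) -> int:
    return bisect_right(BOUNDARIES, percentile) + 1

print(stanine(10))  # 2 (below average)
print(stanine(50))  # 5 (average)
print(stanine(90))  # 8 (above average)
```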
Standard scores
Standard scores provide a measure of a student's performance on a test against other children of the same age. They show how far above or below the average range a particular student's score sits.
To interpret standard scores, we use the Normal Curve as the base. A standard score of 100 is the average. A standard score of between 85 and 115 reflects the average range. A standard score of 88 could be described as low average, and a standard score of 112 could be described as high average. Standard scores below 85 reflect a performance that is below the expected range for a student of that age and/or year level. The lower the score, the more difficulty is being experienced by the student.
A standard score does not tell you which questions a student answered correctly or incorrectly. It is still important to interrogate the test results.
Examples of norm-referenced standardised tests include the YARC and the Single Word Spelling Test.
Because these tests are standardised, it is essential that the directions for administration are followed exactly as prescribed. Deviating from the directions will invalidate the derived scores (standard score, percentile, age equivalence or stanine).
For more information, refer to Deciding if a student has a learning difficulty in literacy.
Interpreting student data – examples
The following reading outcomes are for Year 2 students, Mahli and Mason.
The data are from a comprehension test (Reading Progress Test) and an individual word reading test (Burt Word Reading Test). The students first completed the comprehension test for their own Year 2 level and then the test for the Year 1 level. For each test, Mahli and Mason were given a standard score, reported here as a percentile rank, that provides a measure of their performance against other children of the same age.
Making comparisons with other students
There's no exact score, or cut-off point, at which a student's outcomes can be definitively categorised as a learning difficulty. However, a student who consistently scores at or below the fifteenth percentile for their year level is likely to have some form of learning difficulty.
For the Reading Progress Test (Year 2) and the Burt Word Reading Test both students scored at or below the fifteenth percentile: Mahli in the fifteenth and fourteenth percentile respectively, and Mason in the second and fifth percentile respectively.
For the Reading Progress Test (Year 1) Mahli scored in the sixty-first percentile, and Mason scored at the twelfth percentile. While Mason’s scores indicate consistent difficulties, as well as the potential presence of a learning disability (such as dyslexia), Mahli’s scores indicate that there may be other factors contributing to their difficulties.
While both Mahli's and Mason's scores are low, they do not immediately tell us what is causing their comprehension difficulties. Furthermore, their overall scores do not reveal what each student knows and can do. To discover this, it is necessary to look at each item of the assessment to see which skills each student has displayed and which they are missing.
Looking for patterns in student data
Mahli achieved low scores in both comprehension and word reading at the Year 2 level. Mahli's word reading may be influencing their reading comprehension; however, this did not appear to restrict their ability to understand texts at the Year 1 level.
Mason, on the other hand, demonstrated consistent difficulty across both comprehension levels and word reading.
In general, if a student's word reading accuracy exceeds their comprehension of a text when reading out loud and when responding to comprehension questions, it is possible that the difficulty lies in understanding language rather than in decoding words.
If comprehension consistently exceeds word reading accuracy when reading out loud or responding to oral comprehension questions, it is likely that the student has developed a strong bank of words that they recall as 'wholes' in their visual memory rather than mastering word decoding skills. While visual memory of whole words may 'work' for the first two to three years of school, it is not sustainable, and usually around the mid-primary years these students will experience significant difficulties with text comprehension.
Young students with this profile should be assessed at the level of phonemic awareness and single-word reading. Intervention may also be required.
Identifying possible causes of a learning difficulty
When it comes to formal assessments of students with learning disabilities, such as dyslexia, it is important that they are undertaken by a qualified health professional (for example, psychologist, speech pathologist).
However, formal assessments are appropriate for only three to five per cent of students. In most cases, understanding a student's difficulties and their likely causes will rely on your professional judgement as their teacher, as well as advice from your school's wellbeing and learning support teams.
To identify possible causes of a student’s learning difficulty in literacy, start by comparing their reading comprehension and word reading ability with their:
- oral language knowledge and skills
- phonological and phonemic knowledge and skills
- orthographic knowledge and skills and rapid naming ability
- metacognition and self-efficacy.
Mahli's and Mason's performance on different tasks is shown below; a short sketch for scanning such results follows the lists.
Listening comprehension and word reading test (percentile ranks)
- Retell the text – Mahli: twenty-third, Mason: first
- Recognise text ideas – Mahli: twenty-fifth, Mason: sixth
- Reading unfamiliar words – Mahli: twenty-third, Mason: first
- Reading exceptional words – Mahli: sixteenth, Mason: ninth
Assessing and Teaching Phonological Knowledge (proportion correct)
- Recognising rhyme patterns – Mahli: 1, Mason: 1
- Say rhyming patterns – Mahli: 1, Mason: 0.85
- Segment onset and rime – Mahli: 1, Mason: 1
- Identify first sound – Mahli: 1, Mason: 1
- Segment into sounds – Mahli: 0.75, Mason: 0.75
- Tap for sounds – Mahli: 1, Mason: 0.75
- Count for sounds (3–5) – Mahli: 1, Mason: 0.5
- Blend onset rhyme (3–5) – Mahli: 1, Mason: 1
- Blend sounds (3–6) – Mahli: 0.75, Mason: 0.25
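If you keep results like these in a spreadsheet, a short sketch can flag the tasks on which each student falls below a mastery cut-off. The 0.8 threshold here is an assumption for illustration, not a value from any test manual.

```python
# Sketch: flag phonological tasks below a mastery cut-off for each student.
# Scores are the proportion-correct results listed above; the 0.8 cut-off
# is an assumed value for illustration only.
scores = {
    "Segment into sounds": {"Mahli": 0.75, "Mason": 0.75},
    "Tap for sounds":      {"Mahli": 1.0,  "Mason": 0.75},
    "Count for sounds":    {"Mahli": 1.0,  "Mason": 0.5},
    "Blend sounds":        {"Mahli": 0.75, "Mason": 0.25},
}
CUTOFF = 0.8

for student in ("Mahli", "Mason"):
    needs_support = [task for task, result in scores.items()
                     if result[student] < CUTOFF]
    print(f"{student}: {needs_support}")
# Mahli: ['Segment into sounds', 'Blend sounds']
# Mason: ['Segment into sounds', 'Tap for sounds', 'Count for sounds', 'Blend sounds']
```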
You could interpret these scores and their patterns in the following ways:
Both Mahli's and Mason's results for listening comprehension show that they were in the low range for their year level. When compared with their reading comprehension scores, these data suggest that Mahli's oral language is sufficient to support their understanding of Year 1 texts but not Year 2 texts.
The narrative elements mentioned by a student in their retelling of a text indicate their understanding of the narrative genre. Narratives have a setting or context, an initiating event, an internal response to the event by the protagonist, an attempt to resolve it, a consequence and an ending.
Evaluation of the narrative elements mentioned in each retelling task shows that Mahli referred to the setting and the consequences in the story, while Mason made only partial reference to the initiating event. These data reveal which features of narratives each student is familiar with and uses to organise what they've heard or read.
The word reading test showed that Mahli correctly read seven of the 20 regular words and 16 of the 20 exceptional words. Mason's scores indicate difficulty with most aspects of word decoding. Coupled with their comprehension scores, this suggests that Mason has a reading profile consistent with a mixed reading difficulty, in which a student has difficulty both learning to decode and read words and extracting meaning from text. Both students have difficulty using the decoding skills needed to read Year 2 words accurately.
Mahli’s and Mason’s phonological and phonemic skills results suggest that they each have the earliest developmental skills in place. However, they were less confident segmenting longer one-syllable words into individual sounds and blending longer sound strings into words.
These interpretations are linked closely with the data (local or near interpretations). You can also make broad-based interpretations.
Interpreting literacy data
There are two types of interpretations you can make using a student’s outcomes on literacy assessments: local or near interpretations, and broad-based or far interpretations.
Local or near interpretations
Examples of local or near interpretations are interpreting a student’s outcome on a particular test or task, comparing their outcomes on two similar tests or tasks, or inferring where the student is located developmentally in terms of the specific skill. These types of interpretations are shown above.
Broad-based or far interpretations
Broad-based or far interpretations are when you infer less directly. You might infer that, given their listening comprehension scores, Mason’s oral language abilities are less well-developed than Mahli’s, or that a speech pathology assessment is needed for Mason but not for Mahli.
You might also make inferences about working or short-term memory, on-task attention, self-efficacy as readers, and a student’s ability to self-manage their reading.
When making broad-based interpretations about student outcomes, it is important to look for supporting evidence and ways to test these assumptions. This evidence can be drawn from classroom behaviours and triangulated data sources, including observations.
For example, you may begin to monitor Mason's use of oral language in the classroom: their use and expression of ideas, their use of language conventions and their ability to use language for social purposes.
You might monitor Mahli’s ability to segment words of five or more sounds into separate sounds, and how they blend strings of sounds. You may also observe how Mahli retains ideas in sequence and how many different ideas they can retain at any given time.