You will be asked to select a purpose, and then a prompt, when you create an e-asTTle writing test. See Choosing a Writing Prompt for more information. You can also use the e-asTTle scoring rubrics to assess a prompt you made up.
Published: 1 April 2016
This section explains some of the underlying test theory used in e-asTTle for those who are interested. For a simplified explanation, see Guidelines on creating a good test. For an explanation of the Multi-facet Rasch Model used in scoring Writing, see the separate page on that topic. Consider two Reading tests.
The first has 32 questions, all at curriculum level 2. The second has 24 much harder questions, entirely at curriculum level 4. The easy test is given to Johnny at the beginning of the year, and the harder test at the end of the year.
Johnny got all questions right on the first test, but only eight questions correct on the second. Has Johnny improved? It is obvious the tests are not comparable, and when using percentages, there is no way of working out what progress Johnny has made. The Rasch model solves this by placing questions and students on a single difficulty scale, and the resulting scores can be used for comparison, even across test papers that are very different.
The difficulty values are later translated to curriculum levels. The Rasch model is about probabilities. It is based on the idea that we can model how likely it is that a student responds correctly to a test question, based on the difference between the question's difficulty and the student's ability (Wright, Best Test Design). Mathematically, a logistic curve is used to model how the probability of a correct response relates to how far apart the student and the question are in terms of difficulty.
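The logistic relationship can be sketched in a few lines. This is an illustrative function, not e-asTTle's actual code; abilities and difficulties are expressed in logits, as is standard for the Rasch model.

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: the probability of a correct response depends only on
    the gap between student ability and question difficulty (in logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals difficulty, the probability is exactly 0.5;
# a student 2 logits above a question succeeds about 88% of the time.
print(round(p_correct(1.0, 1.0), 2))   # 0.5
print(round(p_correct(2.0, 0.0), 2))   # 0.88
```

Note that only the difference matters: a strong student on a hard question and a weak student on an easy question can have exactly the same chance of success.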
Periodically, the responses to all questions in e-asTTle are extracted into a huge matrix, and raw scores (i.e. percent-correct values) are computed. For each question and each student, we work out the ratio between percent-correct and percent-incorrect.
This is transformed using a natural logarithmic transformation. Question difficulties are adjusted for the spread of abilities, and person abilities are adjusted for the width of the test they sat.
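The ratio-then-logarithm step described above is the standard log-odds (logit) transformation; a small sketch (illustrative only):

```python
import math

def log_odds(percent_correct: float) -> float:
    """Natural log of the ratio percent-correct : percent-incorrect."""
    return math.log(percent_correct / (100.0 - percent_correct))

# A student who answered 75% correctly gets an initial ability of ~1.1 logits.
# A question that 75% of students answered correctly gets difficulty ~-1.1:
# easy questions sit low on the scale, so the sign is flipped.
print(round(log_odds(75.0), 2))    # 1.1
print(round(-log_odds(75.0), 2))   # -1.1
```

These log-odds values serve as starting estimates, which the adjustments described above (for the spread of abilities and the width of the test) then refine.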
Then, Joint Maximum Likelihood Estimation is used to iterate through the matrix and fit it to a logistic curve like the one described above. The resulting difficulty and ability values are put back into e-asTTle and used to score students.
A student whose ability equals a question's difficulty has a 50% chance of answering it correctly. A similar procedure applies to deep, surface and strand scoring; however, here only a subset of questions is used in the calculation process. For example, only Algebra questions are included when generating an Algebra score. Once the questions have been assigned difficulty values, a sample of questions is taken and placed in difficulty order, i.e. from easiest to hardest. A panel of teachers and curriculum experts then decides the cut-off points for each curriculum level.
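Once the panel has set cut-off points, translating a Rasch ability into a curriculum level is a simple threshold lookup. The cut-off values and level labels below are hypothetical, chosen purely for illustration; the real boundaries are the ones the panel decides.

```python
import bisect

# Hypothetical logit cut-offs, for illustration only. In e-asTTle the
# panel of teachers and curriculum experts sets the real boundaries.
CUTOFFS = [-1.5, -0.5, 0.5, 1.5]       # boundaries between adjacent levels
LEVELS = ["2", "3", "4", "5", "6"]     # curriculum level labels

def curriculum_level(ability: float) -> str:
    """Map a Rasch ability (in logits) to a curriculum level label."""
    return LEVELS[bisect.bisect_right(CUTOFFS, ability)]

print(curriculum_level(-2.0))  # 2
print(curriculum_level(0.0))   # 4
```

Because both students and questions live on the same logit scale, the same cut-offs that band students into levels also band the questions themselves.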
For more information on the Rasch Model, the www. website is a good starting point.

The Rasch Model in e-asTTle