Perspectives in Education (PiE) 2016, Volume 34, Issue 4

  • Unpacking instructional alignment: the influence of teachers’ use of assessment data on instruction
    (Faculty of Education, University of the Free State, 2016) Abrams, Lisa; Varier, Divya; Jackson, Lisa
    The use of assessment data to inform instruction is an important component of a comprehensive standards-based assessment programme. Examining teachers’ data use for instruction can reveal the extent to which instruction is aligned with established content standards and assessments. This paper describes the results of a qualitative study of teachers’ data use in a mid-Atlantic metropolitan area in the United States. Focus group interviews were conducted with 60 upper elementary and middle school teachers from 45 schools. Findings indicate that teachers aligned instruction and assessments with the state curriculum with the goal of improving student performance. While teachers found day-to-day informal assessments essential to shaping instruction, periodic formal assessments helped them monitor student progress and remediation efforts. Teachers described challenges associated with the misalignment of periodic assessments with instructional content, with the breadth of content and higher cognitive demand expected in the newer state curriculum, and with the lack of infrastructure to support data use.
  • A maximum likelihood based offline estimation of student capabilities and question difficulties with guessing
    (Faculty of Education, University of the Free State, 2016) Moothedath, Shana; Chaporkar, Prasanna; Belur, Madhu N.
    In recent years, the computerised adaptive test (CAT) has gained popularity over conventional exams in evaluating student capabilities with the desired accuracy. However, the key limitation of CAT is that it requires a large pool of pre-calibrated questions. In the absence of such a pre-calibrated question bank, offline exams with uncalibrated questions have to be conducted. Many important large exams are offline, for example the Graduate Aptitude Test in Engineering (GATE) and the Japanese University Entrance Examination (JUEE). In offline exams, marks are used as the indicator of students’ capabilities. In this work, our key contribution is to question whether marks obtained are indeed a good measure of students’ capabilities. To this end, we propose an evaluation methodology that mimics the evaluation process of CAT. In our approach, based on the marks scored by students on various questions, we iteratively estimate question parameters such as difficulty, discrimination and the guessing factor, as well as student parameters such as capability, using the 3-parameter logistic ogive model. Our algorithm uses alternating maximisation to maximise the log-likelihood of the question and student parameters given the marks. We compare our approach with marks-based evaluation using simulations. The simulation results show that our approach outperforms marks-based evaluation. (A minimal sketch of the alternating-maximisation step appears after this list.)
  • Whatever happened to school-based assessment in England’s GCSEs and A levels?
    (Faculty of Education, University of the Free State, 2016) Opposs, Dennis
    For the past 30 years, school-based assessment (SBA) has been a major feature of GCSEs and A levels, the main school examinations in England. SBA has allowed teachers to allocate marks to their students for the level of skill that they show in their work. Such skills include, for example, experimental techniques in science, performance in drama and enquiry skills in history. These skills can be difficult to assess validly in written examinations. Of course, SBA can also provide an alternative to timed, written examinations taken at the end of the course of study for assessing the same knowledge and skills. At the start of the millennium, concerns grew that plagiarism, excessive input from teachers and parental support were distorting SBA marks and adversely affecting learning. Attempts were made to tighten the arrangements around SBA but, in the context of England’s school accountability arrangements, these did not prove wholly successful. The current, substantial reforms have considerably reduced the use of SBA, removing it entirely in some subjects and relying much more on written examinations. This paper describes the influence of school accountability arrangements on the design of the new GCSEs and A levels and includes evidence from a teacher survey of assessment practices in schools. It explores the principles by which decisions have been made regarding the assessment arrangements for different subjects. It considers how the reduction in SBA might have a positive influence on the taught curriculum.
  • The viability of individual oral assessments for learners: insights gained from two intervention evaluations
    (Faculty of Education, University of the Free State, 2016) Prinsloo, C. H.; Harvey, J. C.
    It is essential for learners to develop foundational literacy skills, ideally in the first grade of formal education. These skills are then firmly entrenched and can be expanded in the following grades to form a basis for all future academic study. Appropriate assessment practices and tools to aid this process can inform the achievement of quality education. Assessment and the curriculum are intertwined in relation to teaching and learning. Through assessment, it can be established whether all learners have attained the curriculum content, knowledge and proficiencies for a given year. Furthermore, assessment can show teachers which specific areas learners are struggling with and provide insight for remedial measures. Together, these can offer ways to improve education. In this article, individual oral assessment using the Early Grade Reading Assessment (EGRA) tool is discussed on the basis of two recent impact evaluations of teacher interventions. Each intervention conceptualised its own theory of change to improve learner language and literacy development. The interventions also differed in the target language: English as First Additional Language and Setswana as Home Language. Despite these differences, using the EGRA tool in both intervention evaluations allowed for a discussion of its usefulness in South Africa with regard to suitability, reliability and validity, assistance to educators, and amendments and suggestions for overcoming practical challenges. In conclusion, recommendations are made for improving education and the development of literacy in South African schools.
  • A comparative analysis of pre-equating and post-equating in a large-scale assessment, high stakes examination
    (Faculty of Education, University of the Free State, 2016) Ojerinde, Dibu; Popoola, Omokunmi; Onyeneho, Patrick; Egberongbe, Aminat
    The statistical procedure used to adjust for differences in difficulty across test forms is known as “equating”. Equating makes it possible for various test forms to be used interchangeably. In terms of where equating fits in the assessment cycle, there are pre-equating and post-equating methods. The major benefit of pre-equating is that it facilitates the operational processes of examination bodies in terms of rapid score reporting, quality control and flexibility in the assessment process. The purpose of this study is to ascertain whether pre- and post-equating results are comparable. Data for this study, which adopted an equivalent-groups design, were taken from the 2012 Unified Tertiary Matriculation Examination (UTME) pre-test and the 2013 UTME post-test in the Use of English (UOE) subject. A pre-equating model based on the 3-parameter logistic (3PL) Item Response Theory (IRT) model was used, and IRT software was used for the item calibration. Pre- and post-equating were carried out using 100 items per test form in a UOE test. The results indicate that the raw-score and ability estimates from the pre-equated model and the post-equated model were comparable. (An illustrative sketch of one standard equating computation appears after this list.)
  • The use of Rasch competency bands for reporting criterion-referenced feedback and curriculum-standards attainment
    (Faculty of Education, University of the Free State, 2016) Combrinck, Celeste; Schoeman, Vanessa; Maree, David
    This study describes how criterion-referenced feedback was produced from English language, mathematics and natural sciences monitoring assessments. The assessments were designed for grades 8 to 11 to give an overall indication of the curriculum standards attained in a given subject over the course of a year (N=1113). The Rasch Item Map method was used to set cut-scores for the Rasch competency bands, after which subject specialists examined the items in each band. Based on the content and difficulty of the items, descriptions of the proficiency levels were generated. Learner reports described each individual’s current proficiency level in a subject area as well as the subsequent level to be attained. This article shows how the Rasch Item Map method can be used to align assessments with curriculum standards, which facilitates reporting learner performance as criterion-referenced feedback and empowers learners, teachers and parents to focus on subject content and competencies. (A brief sketch of the item-mapping and band-assignment steps appears after this list.)
  • A standards-based approach for reporting assessment results in South Africa
    (Faculty of Education, University of the Free State, 2016) Kanjee, Anil; Moloi, Qetelo
    This article proposes the use of a standards-based approach to reporting results from large-scale assessment surveys in South Africa. The use of this approach is intended to address the key shortcomings observed in the current reporting framework prescribed in the national curriculum documents. Using the Angoff method and data from the Annual National Assessments, the article highlights how standard-setting procedures should be conducted to develop meaningful reports that provide users with relevant information that can be effectively used to identify and develop appropriate interventions to address learning gaps. The findings of the study produced policy definitions and performance-level descriptors that are proposed for use in enhancing the reporting of results for grade six English and mathematics. Moreover, the findings also indicate that reporting the Annual National Assessments using the national curriculum reporting categories overestimates the percentage of learners classified at the lowest performance levels and underestimates the percentage in the next category. This finding has serious implications for the implementation of targeted interventions aimed at improving learning for all. The paper concludes by noting areas of further research for enhancing the use of results from large-scale assessment surveys and for supporting schools and teachers in addressing the specific learning needs of all learners, especially the poor and marginalised. (A brief sketch of the Angoff cut-score arithmetic appears after this list.)
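
Illustrative sketches

First, for the offline estimation paper: a minimal sketch, not the authors' code, of alternating maximisation under the 3-parameter logistic (3PL) model. Student abilities and item parameters (discrimination, difficulty, guessing) are updated in turn to increase the joint log-likelihood of a 0/1 response matrix; all names, bounds and starting values here are illustrative assumptions.

```python
# A minimal sketch of alternating maximum-likelihood estimation under the
# 3PL model. Assumes X is a 0/1 NumPy array of shape (students, questions);
# bounds and starting values are illustrative, not the authors' choices.
import numpy as np
from scipy.optimize import minimize

def p_correct(theta, a, b, c):
    # 3PL: guessing floor c plus (1 - c) times a logistic term in a*(theta - b)
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def neg_log_lik(theta, a, b, c, X):
    # Joint negative log-likelihood of the responses in X
    P = np.clip(p_correct(theta[:, None], a[None, :], b[None, :], c[None, :]),
                1e-9, 1.0 - 1e-9)
    return -np.sum(X * np.log(P) + (1.0 - X) * np.log(1.0 - P))

def alternating_ml(X, n_iter=10):
    n_students, n_items = X.shape
    theta = np.zeros(n_students)          # student capabilities
    a = np.ones(n_items)                  # discrimination
    b = np.zeros(n_items)                 # difficulty
    c = np.full(n_items, 0.2)             # guessing factor
    for _ in range(n_iter):
        # Step 1: hold item parameters fixed; maximise over each student's theta.
        for i in range(n_students):
            res = minimize(lambda t: neg_log_lik(t, a, b, c, X[i:i + 1]),
                           x0=[theta[i]], bounds=[(-4.0, 4.0)])
            theta[i] = res.x[0]
        # Step 2: hold abilities fixed; maximise over each item's (a, b, c).
        # The bounds act as a crude guard against the weak identifiability
        # of joint ML estimation for the 3PL model.
        for j in range(n_items):
            res = minimize(lambda p: neg_log_lik(theta, np.array([p[0]]),
                                                 np.array([p[1]]),
                                                 np.array([p[2]]),
                                                 X[:, j:j + 1]),
                           x0=[a[j], b[j], c[j]],
                           bounds=[(0.2, 3.0), (-4.0, 4.0), (0.0, 0.5)])
            a[j], b[j], c[j] = res.x
    return theta, a, b, c
```

Because each sub-problem is maximised while the other block of parameters is held fixed, every sweep leaves the joint log-likelihood no worse (up to numerical tolerance), which is what drives the alternating scheme towards a local maximum.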
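
Second, for the pre-/post-equating paper: the abstract confirms 3PL calibration but not the specific equating function, so the sketch below assumes IRT true-score equating, a standard option once both forms are calibrated. A raw score on the new form is mapped through that form's test characteristic curve (TCC) to an ability, then to the expected score on the reference form.

```python
# A sketch of IRT true-score equating between two 3PL-calibrated forms.
# new_form and ref_form are (a, b, c) parameter arrays; names are illustrative.
import numpy as np

def tcc(theta, a, b, c):
    # Test characteristic curve: expected raw score at each ability in theta
    P = c + (1.0 - c) / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return P.sum(axis=1)

def true_score_equate(raw_score, new_form, ref_form):
    a_n, b_n, c_n = new_form
    a_r, b_r, c_r = ref_form
    grid = np.linspace(-4.0, 4.0, 2001)
    # Invert the new form's TCC: find the ability whose expected score is
    # closest to raw_score (scores below the guessing floor sum(c) simply
    # clip to the lowest grid point)...
    theta_hat = grid[np.argmin(np.abs(tcc(grid, a_n, b_n, c_n) - raw_score))]
    # ...then read off the expected score on the reference form there.
    return float(tcc(np.array([theta_hat]), a_r, b_r, c_r)[0])
```

Comparing conversion tables built from pre-test calibrations with those built from post-test calibrations is then a direct way to check whether the two sets of estimates are, as the study found, comparable.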
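
Third, for the Rasch competency-band paper: a sketch of the two mechanical steps implied by the abstract, placing items on the ability scale (the item-map step) and assigning learners to bands by cut-scores. The cut-score values, band labels and response probability are invented placeholders; the study's cut-scores were set with the Rasch Item Map method and reviewed by subject specialists.

```python
# A sketch of item mapping and band assignment on a Rasch (logit) scale.
# CUTS and BANDS are hypothetical placeholders, not the study's values.
import numpy as np

CUTS = np.array([-1.0, 0.0, 1.2])   # cut-scores between adjacent bands (logits)
BANDS = ["Not yet competent", "Partially competent", "Competent", "Advanced"]

def item_map_location(difficulty, rp=0.67):
    # Under the Rasch model, P(correct) = 1 / (1 + exp(-(theta - b))). An item
    # "maps" at the ability where that probability reaches a chosen response
    # probability rp (e.g. RP67): theta = b + ln(rp / (1 - rp)).
    return difficulty + np.log(rp / (1.0 - rp))

def band_of(measure):
    # Count how many cut-scores lie at or below the person measure.
    return BANDS[int(np.searchsorted(CUTS, measure, side="right"))]

def items_per_band(item_difficulties, rp=0.67):
    # Group items by the band their mapped location falls into, so subject
    # specialists can write a content descriptor for each band.
    groups = {band: [] for band in BANDS}
    for idx, d in enumerate(item_difficulties):
        groups[band_of(item_map_location(d, rp))].append(idx)
    return groups
```

The same band_of function applied to a learner's Rasch measure yields the current proficiency level reported to learners, teachers and parents, with the next band up as the level to be attained.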
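
Finally, for the standards-based reporting paper: the abstract confirms the Angoff method; the sketch below shows only its core arithmetic as commonly practised, with hypothetical inputs. Each panellist judges the probability that a minimally competent learner answers each item correctly; summing over items gives that panellist's implied cut score, and the panel average sets the boundary.

```python
# A sketch of the core Angoff cut-score arithmetic; inputs are hypothetical.
import numpy as np

def angoff_cut_score(ratings):
    # ratings[p][i]: panellist p's judged probability that a minimally
    # competent learner answers item i correctly.
    ratings = np.asarray(ratings, dtype=float)
    # Sum probabilities over items per panellist, then average the panel.
    return ratings.sum(axis=1).mean()

def classify(raw_scores, cuts, labels):
    # cuts: ascending cut scores between adjacent performance levels;
    # len(labels) must equal len(cuts) + 1.
    idx = np.searchsorted(np.asarray(cuts), np.asarray(raw_scores), side="right")
    return [labels[int(i)] for i in np.atleast_1d(idx)]
```

Repeating the judgement exercise once per performance-level boundary yields the full set of cuts against which learners are classified into the proposed performance levels.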