This talk by Gaynor Evans and Jamie Dunlea is based on their research findings from Bangladesh and elsewhere. I met Gaynor at a workshop on language assessment design that I participated in last year in Dhaka. At the time, the Bangladeshi government was rolling out a speaking and listening assessment in their external board examinations for the first time and this was a topic that was often brought up and discussed by the other Bangladeshi workshop participants.
While I was curious about the impact of the new assessment in Bangladesh, the talk was less about Bangladesh and more about the gap between the language learning outcomes established by policy and what actually happens. The talk was a bit rushed and I may have missed some important details. Here are some of the key points I managed to note down:
- In Bangladesh, the government is aiming to achieve communicative competence at an A2 level across all four skills for students in grade 10, but the majority of students are at A1.
- Some of the common issues experienced in the context of implementing a speaking and listening assessment include class size, language ability and teacher pedagogic skills.
- In a study by Dr. Rita Green (who led the workshop I attended) across 26 Bangladeshi schools, teacher talk in English declined progressively from primary to higher secondary, and English was rarely heard at higher levels.
- What’s required is an evidence-based approach to planning and setting goals (this has apparently been achieved in Bangladesh).
- There’s a lack of understanding of language learning outcomes and of how they interact with the wider context and the results of education reform.
- Some of the possible approaches are apparently listed in this report – English Impact: An Evaluation of English Language Capability – which recommends a strategy adapted from Amartya Sen’s capabilities approach, originally developed in welfare economics as a way of looking at human development as ‘a concentration on freedom to achieve in general and the capabilities to function in particular.’ Here’s how the report describes its adaptation to language teaching – I really don’t see how this differs from what governments have always purported to do:
> This adaptation of English language capability can therefore be described in terms of the level of achievement, or proficiency, reached by a defined population; and the opportunities provided to them to achieve greater proficiency via teaching and learning practice derived from a policy or national guideline. (p. 9)
- Jamie spoke about a profile builder to understand the educational environment in the country – it wasn’t really clear what this tool is, or whether it is indeed a tool. The report doesn’t mention a profile builder.
- The CEFR is used increasingly outside its ‘home’ in Europe. However, it was never intended to be used as is but was meant to be adapted to the local context. The way it’s being applied now globally is as a very simplistic tool for setting policy goals.
- Even in Europe, there’s a lack of correlation between learning outcomes and CEFR levels, and there’s a gap between what governments want and what’s achievable. Across Europe, B2 is the goal for matriculation, but in reality proficiency is far lower. For example, students in France are mostly below A2 in reading and listening.
- Some countries have developed their own frameworks: the CEFR-J in Japan and the China Standards of English (CSE).
- Planning and resources, goals, and time horizons need to be taken into consideration to formulate an evidence-based policy.
To summarise, I think the presenters are suggesting that governments need to set realistic goals which are meaningful within their educational contexts and this might require them to develop their own language proficiency frameworks, instead of arbitrarily imposing the CEFR.