A Taxonomy of Task Types & Evaluation Framework
Last week Rachel and I got together to compile a taxonomy of task types and use it to create a framework with which we could then evaluate a unit of a coursebook.
I started out by reading Rod Ellis’ chapter on Macro- and Micro-evaluations of Task Based Teaching (in Tomlinson 2012:212-235) which distinguished between six types of task in three categories (focused/unfocused; input-providing/output-prompting; closed/open outcome). While I could fully appreciate the benefits of distinguishing between these types of task, I felt it would be useful to take a step back and begin by identifying what makes something a task – as opposed to what Ellis terms a ‘situational grammar exercise’.
To help with this, I turned to Dave and Jane Willis’ handbook, Doing Task Based Teaching (Willis & Willis 2007:13-15), which provides a set of criteria for evaluating how task-like something is, as well as a summary of different task types.
After some discussion of our reading, Rachel and I then decided to create a framework that would initially give a score for how task-like an activity is and then identify both the type of activity – such as matching, comparing, etc. – and whether it was focused or unfocused, input-providing or output-prompting, and had a closed or open outcome.
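For readers who think in code, the shape of this framework can be sketched as a small data structure: a yes/no answer per task-likeness criterion (summed into a score out of 8), plus the three Ellis distinctions as labels. This is only an illustrative sketch of the idea, not our actual spreadsheet; the field names and example answers are invented for the purpose.

```python
from dataclasses import dataclass
from enum import Enum

# Ellis's three distinctions, each a binary label.
class Focus(Enum):
    FOCUSED = "focused"
    UNFOCUSED = "unfocused"

class Direction(Enum):
    INPUT_PROVIDING = "input-providing"
    OUTPUT_PROMPTING = "output-prompting"

class Outcome(Enum):
    CLOSED = "closed"
    OPEN = "open"

@dataclass
class ActivityEvaluation:
    name: str             # e.g. "6.1 speaking ex 5" (hypothetical label)
    activity_type: str    # e.g. "matching", "comparing"
    criteria: list[bool]  # yes/no for each of the 8 task-likeness criteria
    focus: Focus
    direction: Direction
    outcome: Outcome

    @property
    def task_score(self) -> int:
        # "Task-ness" score out of 8: one point per criterion met
        return sum(self.criteria)

# An invented example activity evaluated against the framework:
ex = ActivityEvaluation(
    name="6.1 speaking ex 5",
    activity_type="comparing",
    criteria=[True, True, False, True, True, False, True, True],
    focus=Focus.UNFOCUSED,
    direction=Direction.OUTPUT_PROMPTING,
    outcome=Outcome.OPEN,
)
print(ex.task_score)  # → 6
```

In practice our Excel spreadsheet did the same job, of course, with one row per activity and the score computed by counting the criteria met.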
Here’s the glossary we put together to accompany the evaluation framework – in part for our own reference and also so that it might be of future use for colleagues and peers.
With Rachel’s Excel expertise, and my organisational passions, we began by creating a spreadsheet of the above and tested it out by applying it to a unit of a coursebook. We chose SpeakOut Intermediate (Clare & Wilson 2011, Harlow: Pearson) because it’s a book we’ve both used before and are relatively familiar with, and decided to focus on unit six as the topic (emotion) seemed interesting. Unit six comprises four sub-sections, so we began by evaluating 6.1. This process of testing out the evaluation framework was extremely useful, as it enabled us to identify gaps in the activity types as well as reorganise and expand the framework as we went along. The activity types highlighted in green show our own additions made during the evaluation process.
Here are the results for 6.1:
So each ‘exercise’ ended up with a ‘task-ness’ score out of 8, with 8 being very task-like. The second column shows how the book identifies its parts – under headings such as ‘speaking’, ‘grammar’, etc. Note that we also included activities in the vocabulary and language banks where they corresponded with sub-sections in the unit.
It was interesting to observe that the grammar and vocabulary sections consisted almost entirely of situational grammar exercises (SGEs) as opposed to tasks.
Here are our results for 6.2, 6.3 and 6.4.
After completing our evaluation of the unit we compiled the results into tables to give an overview of our findings. Scroll down for a more visual representation…
To make the results easier to process, I used PiktoChart (a free online programme) to create an infographic showing the main areas we had evaluated, which is on the whole more readable and digestible:
Observations & Conclusions
- While the process felt very much like data gathering, the results were very revealing and it was really useful to get a bigger picture of the task types in the unit.
- That said, it has to be noted that there’s some degree of subjectivity in the evaluation itself. Giving the same framework and same unit to different teachers might reveal different results, so the evaluation is qualitative as well as quantitative.
- This framework, and the activity types included, is far from exhaustive. Others in the Diploma group identified different approaches to evaluating task types – such as Anna’s (rather brilliant) idea of using Bloom’s Taxonomy of higher and lower order thinking skills.
- Evaluation of one unit in isolation from the rest of the coursebook isn’t necessarily an accurate reflection of the book in its entirety. As this is also an evaluation of task types only (as opposed to other factors, such as topic, visuals, authentic language, etc., identified in an earlier post on coursebook evaluation), it’s not enough on which to base a decision about whether to use the coursebook.
- Given that the evaluation process is very time-consuming, I have doubts about the practicality of the framework for future use – by us and by other teachers. That said, the process of creating and applying it has identified some important elements for consideration which I hope will stay in mind when selecting a coursebook based on ‘intuition’ (which – as I observe in a previous post – is largely influenced by experience).
- Our feelings after discussing our results were that although at first glance the unit seems very task-based, the lack of balance in activity types and skills, in combination with the bias toward closed-outcome task types, would deter us from choosing the book. That fact in itself, though, reveals the importance of context, as in some situations and for some learners we might feel it more suitable than for others. As with methodology and approaches, materials are, of course, always context-dependent.
Just a little thought here about something that came up in last week’s input session… I may be wrong, but I get the impression that much of the content, skills distribution and task types in coursebooks is dictated by the publishers – as opposed to the writers. As a result, the job of writing a coursebook sounds almost mechanical and seems to leave little room for creativity on the part of the authors. Where then is the pleasure in coursebook writing? This makes me more inclined, if I go into materials development, to go down the route of creating resource books and online materials (like those brilliant people at The Round) rather than coursebooks.