To say that no two learners are the same is almost an axiomatic statement. This is clear to anyone who cares to think about it for a moment. If you take a classroom full of learners and give them the same instruction, under the same conditions, with the same assessments, you’ll get very different results.
For an assessment that has been standardized, you should see a normal or “bell” curve in the distribution of results, the sort we see with things like IQ tests. This is by design, since the difficulty curve of the assessment is adjusted to cleanly divide the population into percentiles. The IQ test in particular has to be re-standardized every few years as IQ scores creep ever upwards. This is known as the “Flynn effect”: the whole population keeps scoring better on the current IQ test, so the average has to be shifted along. In other words, someone who had an IQ of 100 in 1930 might only score an 80 (for example) on the modern test.
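The renorming described above can be sketched in a few lines of code. This is an illustrative sketch, not an actual IQ norming procedure: the raw scores, means and standard deviations below are invented numbers chosen only to show the mechanics of mapping a raw score onto a mean-100, SD-15 scale.

```python
def standardize(raw_score, sample_mean, sample_sd,
                target_mean=100.0, target_sd=15.0):
    """Map a raw test score onto a normed scale (IQ-style: mean 100, SD 15).

    sample_mean and sample_sd come from the current norming population;
    re-standardizing the test means recomputing them from fresh data.
    """
    z = (raw_score - sample_mean) / sample_sd  # standard score
    return target_mean + z * target_sd

# A raw score equal to the current population mean maps to exactly 100:
standardize(31, sample_mean=31, sample_sd=6)   # -> 100.0

# If the population improves (mean rises to 34), the same raw score
# now sits below average -- the Flynn effect renorming in miniature:
standardize(31, sample_mean=34, sample_sd=6)   # -> 92.5
```

The point is that the scale is relative: a fixed raw performance drifts downwards on the normed scale as the population it is compared against improves.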
Assessments of training or other forms of learning, however, are not standardized around population parameters in this way. Rather, they are based on the required level of mastery of a particular competency or group of competencies. In other words, they are usually aimed at reaching a particular outcome, such as being able to weld a joint to a particular standard or complete a safety inspection in a specific way.
Ideally we want every learner who attempts to attain mastery of a particular competency to do so successfully, but in practice some percentage of learners will fail to do this. That percentage will be small for relatively easy competencies such as basic word processor skills and much higher for complex subjects such as theoretical physics. Whatever the absolute difficulty level of a particular competency, for a given program there will always be candidates who fail to overcome it.
There are also a great many false explanations for variable individual performance. Multiple intelligences and learning styles are two common ones that simply do not have credible evidence to back them up. The bottom line is that instructional designers have a key role in the overall success rate of a given course, but limited power to reduce the failure rate. This may seem counterintuitive, but when you think about it there’s sense to this notion. A poorly designed course may see the majority of learners fail it and learn nothing useful. A very well-designed course may have 70% or even 80% of learners reach the desired competency level, but further refinements to the design yield only diminishing returns. One may even find that changing the general course material (which affects all learners) to accommodate those who are having difficulties degrades the performance of more able students through lower motivation or interest.
This situation is to a great extent the product of the industrial age classroom curriculum model. There is one test and one course design for everyone. Most people do OK, some people do brilliantly and the rest are collateral damage. This wasn’t always the main model of teaching and design, however. Before the industrial age demanded a massive number of educated citizens with just enough knowledge to read, write, calculate and (most importantly) follow instructions, the few people lucky enough to get an education did so in a much different way.
That was an age of masters and apprentices. Of small numbers of students taking their own journeys to becoming journeymen and masters in their own right. This was not a battery chicken, production line model of education. This was a labour-intensive, quality-focused approach to teaching someone a vocation. It’s no wonder it didn’t survive the economic demands of mass production and industrialization.
Under the undivided attention of an expert it’s possible to get the best out of a given student. You can give fine-grained feedback, detect misunderstandings and take note of external factors. In other words, whatever a student’s potential may be, a one-on-one teaching relationship without time constraints can maximize it.
Back in the real world, where economic imperatives are a strong driver of performance, this kind of attention is simply too expensive. This gives us a first tentative answer to the question in the title: if you don’t care about cost or time, it makes a lot of sense to treat learners differently based on their capabilities.
The same is true of instructional design. The only time we can truly justify treating students differently based on their individual capabilities is when the return on investment justifies it. If you have to rely on infrequent assessment, human labour and paper, the answer is probably that the return on investment will be inadequate. In this connected, digital age it has become possible both to maintain cost-effectiveness and to customize teaching on an individual basis. Using digital technology like xapiapps, we can keep track of everything the learner does and flag it if an issue is detected. Learning can now be adaptive in a practical way, without relying on massive amounts of time or human resources.
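To make the tracking idea concrete, here is a minimal sketch of recording learner activity as xAPI-style statements and raising a flag when a learner appears to be struggling. The statement shape (actor / verb / object / result) follows the xAPI specification, but the `flag_struggling` rule and its three-failure threshold are hypothetical examples, not part of xapiapps or any other product’s API.

```python
def make_statement(learner_email, verb, activity_id, success):
    """Build a minimal xAPI-style statement (actor / verb / object / result)."""
    return {
        "actor": {"mbox": f"mailto:{learner_email}"},
        "verb": {"id": f"http://adlnet.gov/expapi/verbs/{verb}",
                 "display": {"en-US": verb}},
        "object": {"id": activity_id},
        "result": {"success": success},
    }

def flag_struggling(statements, max_failures=3):
    """Flag a learner once failed attempts reach a (hypothetical) threshold."""
    failures = sum(1 for s in statements if not s["result"]["success"])
    return failures >= max_failures

# Three failed attempts at the same (made-up) activity trip the flag:
log = [make_statement("pat@example.com", "failed",
                      "https://example.com/activities/weld-joint", False)
       for _ in range(3)]
flag_struggling(log)  # -> True
```

In practice a learning record store accumulates these statements automatically as learners work, so a rule like this can run continuously instead of waiting for an infrequent formal assessment.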
Of course, setting this all up requires considerable knowledge and some investment, but it can provide gains in student success rates that are proportionate to the effort invested.
So, the long answer is that yes, there is definitely a case for treating people differently based on individual capabilities and differences. However, crude individualization categories tied to problematic ideas such as learning styles and multiple intelligences do more harm than good in this regard.
For differentiated treatment of learners to have any meaningful outcome, it really has to be individual in nature. Just about any learner will benefit from increased instruction time, instruction tailored to their level of understanding and engagement with their specific misunderstandings and problems. It’s not rocket science. It simply comes down to whether the time, resources and effort make it worthwhile or not. Using the right intelligent automation tools can make it so.