Tensions in HE disability policy – #1 student choice in DSA provision

Most policies present difficulties for implementation which result in tensions or dilemmas. HE disability policy is no different, and this post highlights one such difficulty. The first tension concerns the notion of student choice built into the system of DSA funding. When I first started in HE nearly 20 years ago there was ‘real’ student choice, at least in the university I was working in. Disabled students could choose to organise their own support, in much the same way that direct payments allowed. This was mainly because we didn’t have the resources or the infrastructure to take on that responsibility in the disability office. However, it was often repeated to me (mainly by my manager) that the money belonged to the student and that they were therefore responsible for using it as they saw fit (how things have changed). It was a non-means-tested grant, after all, and at the time universal maintenance grants were still in the memory of most people working in HE.

The reality of course was that this rarely occurred. I was aware of one highly organised student who hired and fired her own support workers, but I believe she had already been doing this for her daily living support arrangements before she arrived at university. Also, there was very little support in place anywhere, so there was no market from which to choose providers.

As I understand it, the monies are still supposed to belong to the student. This is part of the reason for the continued government rhetoric around student choice in the DSA system: it was meant to be a system which gave students control over the services they required.

But what’s the reality? There is no real choice in the provision of DSA. The way the DSA will be spent is pretty much tied up by the assessment of needs (AoN) report. Whilst there is a quotation system built into the AoN process to ensure value for money, the student is not given a choice of who will supply any of the elements. The suppliers named in AoN reports are decided upon by the assessment centres and, more recently for NMH, by the DSA-QAG database. The student is then told by SFE which supplier has been chosen, i.e. the cheapest. In theory the student is allowed to swap suppliers, but why would they do this? The process of applying for DSA is so complicated that by this stage most students are beyond the point of caring. And if it were really their money they would shop around; but at no stage does a student see the money, nor are they given the real role of a consumer.

Recently, when students have exercised their choice by switching suppliers to in-house provision or university-preferred suppliers, SFE has disseminated missives to suppress the practice, as if there were some kind of underhand activity going on. Various bureaucratic barriers have been put in place to stop this happening, which makes a mockery of the notion of student choice. The idea seems to be that the market should be left to create its own fairness through the invisible hand, but of course the market is missing an essential element: a ‘real’ customer. So it’s a false market. In one sense the market is working by creating ‘value’, because prices seem to be falling, but not because of laissez-faire policies: rather through a much more deterministic two-quote system.

Importantly, we shouldn’t forget that students make their choice when they apply for HE study through UCAS. They choose a university based on several criteria, including the disability support available. When a student borrows books from the library they do not expect to pay for that service separately. Nor do they get to take their monies and buy their IT services elsewhere. So why should they be expected to shop around for their disability support, especially when transition is more complicated for disabled students than for other students, and any additional bureaucracy creates additional barriers to access?

Embedding inclusive teaching and learning in your institution – a 7-step guide

David Hopkins provided a useful summary of educational change approaches in an old(ish) paper for the Generic Learning and Teaching Subject Network. In an attempt to ‘institutionalise’ inclusive teaching, learning and assessment, I recently developed a change initiative in an HEI I worked in, with features built in that went beyond mere implementation.

After a small working group was set up to review approaches to government changes to DSA-funded support, I realised that something much more embedded was required than just another short-term task-and-finish group made up of the already converted.

Having seen many an initiative come and go with little long-term impact, I tried to use as many aspects of Hopkins’ suggestions as possible, since changes to beliefs and values require much more than a few champions dotted around the university. And so the ‘Inclusive learning and teaching framework’ (ILTAF) was born.

Based on previous attempts at ‘auditing’ institutional efforts around disability and equality, I produced a tool which was short and simple enough to ensure completion, but broad and complex enough to require some depth of thought from the departments completing it. But here’s the rub: the process of filling out the framework (after feedback, I stayed away from the dreaded ‘a’ word) had to be built into high-level committee structures, sanctioned by senior managers and backed by an ongoing commitment to embed change.

How was this achieved?

  • Persistence – a number of years of change initiatives (the HEA change programme; embedding inclusion into the PGCert route; membership of an assessment working group in which inclusive practice was discussed).
  • Consultation – the framework went through every available committee with some link to the student experience and/or teaching and learning, and the resulting changes were incorporated into the design of the tool.
  • Attention to change theory – I wrote an earlier post on Fullan’s work, and Hopkins’ paper provides further guidance.
  • Innovative tool design – the framework covers many aspects of teaching, learning, assessment and quality assurance. It is also self-rating, so academic staff take ownership and do not feel threatened by outside judgement. It would have been pointless getting central services to ‘audit’ current practice.

The framework also worked on all levels of policy implementation as recommended by Fullan for educational change initiatives:

  • Ratified by senior management: TOP DOWN
  • Academic managers were given responsibility for completion and return of the framework by a deadline. They completed the tool in consultation with course teams but, importantly, the tool was sent out by the Registrar’s department (not the disability office): MIDDLE OUT
  • It was based on actual (not normative) practice: BOTTOM UP

Built into the tool is a scoring system – but a potential problem with the self-assessment was how honest departments were going to be (no one wants to create work for themselves or leave themselves open to negative criticism). However, these anxieties were countered by requiring departments to give examples and provide case studies for areas in which they scored themselves the highest grades, i.e. a 4 or a 5. Departments were then given the responsibility of feeding this practice back to other departments. In this way internal expertise was developed and disseminated from within academic departments. We also developed webpages to support initial thought processes, with the intention of populating these pages with case studies and examples of practice. A national conference was also organised.

If departments scored themselves 3 or under, they were required to develop an action plan for improvement over the next two academic years.

The tool needed to be completed again after those two years so that continued growth and improvement could be achieved. Changes would be made to the scoring system so that achieving the highest grades became more challenging. It was also intended that students would fill out the framework after the first round, to provide the student voice and to compare the student experience with academic practice.

 

How to re-define dyslexia in higher education – Rose report – part I

The Rose report (2009) drew together opinions from across the education debate to produce an informed direction for supporting dyslexia in the schools sector. Rarely discussed in HE circles, the report contains many useful ideas relating to the organisation of support.

It defines dyslexia:

  • as a difficulty in learning accurate and fluent word reading and spelling, underpinned by problems with phonological awareness, verbal memory and verbal processing speed. Phonological awareness approaches to improving reading are certainly a key focus of recent work in the UK, such as that promoted by Maggie Snowling and others, but it’s very unlikely that this is something covered in 1-2-1 specialist tuition sessions in HE.
  • as a continuum, not a distinct category, i.e. there are different levels of severity. By implication, assessing someone ‘with’ dyslexia is a complex decision-making process. This is an important point to consider for HE, as the model is very much about a black-and-white distinction between someone labelled dyslexic and someone not. In order to get DSA, for example, the student needs to prove they are disabled, i.e. dyslexic. This approach is problematic for a number of reasons: many HEIs have taken this distinction as a means of deciding who can access exam arrangements and what arrangements should be put in place, but Rose suggests that there is no sharp dividing line.
  • as co-occurring with a range of overlapping difficulties – but these are not, on their own, markers of dyslexia. For example, it is often suggested that organisational skills might be affected in students with dyslexia, but recent definitions, such as the one in the Rose report, suggest that phonological processing is the defining problem. It could be inferred that phonological processing difficulties and verbal memory might interfere with effective planning, but poor organisation is not in itself a defining feature of dyslexia.

Early identification is also emphasised in the Rose report, but it is still the case that a large number of students with dyslexia only get formally assessed when they enter HE; anecdotally, I’d estimate the rate to be about 20-30%. This could be because they are able to survive at earlier levels of education, but as the literacy demands become more and more complex, problems become more acute. However, the story which many students relay is that there was little support available.

A related problem, finding information on the support available at university, is also reported. A recent investigation undertaken by postgraduate psychology students at York University demonstrated the further barriers which bureaucratic systems of policy implementation create within the sector. For example, whilst many HEPs insist on a post-16 educational psychologist’s report for exam adjustments, a number of them don’t offer any support with paying for the report, and some have even recently reduced the financial help available.