2005 Q8
#1
I've been brave and attempted one of those open questions. I think this answer is longer than I could do in timed conditions, so any pointers on where I might have focused my effort would be handy.


Attached Files
.pdf   2005Module7Exam_paper.pdf (Size: 77.29 KB)
.pdf   IRSE-Mod7-2005-Q8-DAP.pdf (Size: 538.38 KB)
#2
Question 8
Discuss the statement:
“The involvement of humans in train driving and signalling control introduces risks to the safety and efficiency of railway operations. These risks could be eliminated by the application of automatic systems”.

My interpretation, on the basis of the question being a statement:
The first part is true. Humans are fallible and can (and do) cause errors. Many of these have minimal effect and go unnoticed by the system, the user or others.

Part two is the nub of the problem: in an ideal world it is true, but in our world it certainly isn't. However, the DLR runs as a driverless system, as do many metros worldwide. Here we head into the world of tolerable risk.

So, should we assume the examiner is asking whether specific human risks for signallers and drivers, i.e. fatigue, distraction, commercial pressure [late running] and misinterpretation, can be eliminated? That should be an easy argument to make, but I don't believe it holds many points. There are other risks that automation would impart onto the railway and, as Dorothy says, the time to recover from an abnormal event could increase dramatically. A human can, and does, interpret situations based on experience, which can mitigate a situation that is building in the background.

Automatic systems are only as good as their design. Competent staff, a decent set of requirements and minimal value engineering can produce a very able system that adapts to conditions, timetables and its environment. Such systems exist in the UK and have done for many years. Of course, the time a system or a human is truly tested is when something abnormal is happening. Hence, at what point do we engineers make the ALARP judgement? Eliminating human-derived risks and hazards will inevitably fall short of an ALARP argument, even if it is relatively easy to mitigate most risk on a simple railway. UK rail is rarely simple!

I am not sure de-skilling is a good point to make, nor the efficiency of automated systems, nor even degraded modes. It might be worth discussing reducing the frequency of certain risks while introducing others. However, the conclusion would have to be that it is impossible to eliminate these risks, due to the cost and complexity of the systems required to do so. Discussion would be worthwhile around three or four key topics, such as: reducing misinterpretation of indications/signals, traffic management, task-specific programming overcoming an SPF in an overstressed individual...

I've waffled on for too long. As a parting gesture, it is important to understand "risk".
Hazard - something that can cause harm [i.e. is negative].
Risk - the likelihood of the hazard occurring combined with its consequence (frequency x severity).
Mitigation - something put in place to reduce the risk or to eliminate the hazard (the two are not quite the same thing).
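As a minimal sketch of the frequency x severity definition above (all figures are hypothetical, purely for illustration, not from any real assessment), the distinction between reducing a risk and eliminating a hazard can be shown in a few lines:

```python
# Illustrative risk scoring: risk = frequency x severity.
# All numbers below are hypothetical, chosen only to show the arithmetic.

def risk(frequency_per_year: float, severity: float) -> float:
    """Risk as expected harm per year: frequency x severity."""
    return frequency_per_year * severity

# A hazard before any controls are applied.
unmitigated = risk(frequency_per_year=0.5, severity=10.0)  # 5.0

# A mitigation reduces the risk, here by lowering the frequency,
# but the hazard itself remains.
mitigated = risk(frequency_per_year=0.25, severity=10.0)   # 2.5

# Eliminating the hazard drives the frequency to zero,
# so the risk disappears entirely.
eliminated = risk(frequency_per_year=0.0, severity=10.0)   # 0.0

print(unmitigated, mitigated, eliminated)
```

The point of the toy example is simply that mitigation leaves a residual risk to be argued ALARP, whereas elimination removes the term altogether.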

The examiners often confuse the terms, so kudos to Dorothy for stating her interpretation of what the term meant.

J
Le coureur

