Project Evaluation and the Subsidiary Question

Below the grade A+, the probability of getting a grant decreases drastically. The standard project evaluation consists of three steps: (i) a review by external reviewers, (ii) a first ranking produced by the grant-giving institution, (iii) a discussion by a scientific board to establish the final ranking. Finally, an additional politico-administrative layer, on which scientists have no influence (none at all), gives the final answer: YES or NO.


"No decision" is not an option.


The result brings either great joy or a terrible, horrible, sad and awful frustration. The disappointed researcher (at least during the first 6 hours) then waits for the review reports, in which Strengths, Weaknesses, Opportunities and Threats are analyzed using modern management vocabulary and methods, in order to be as objective as possible.


For sure, everybody in the chain does his/her best; there is no question of questioning the validity and honesty of the reviewers and of the board. I witness these processes quite often, and everybody tries their utmost to select the best way to spend the taxpayers' money. We are so fed up with seeing scandals in the newspapers that we want to be irrefutable. This costs a lot of time (and therefore money; that is another question I should discuss in another note, about the URR, the Universal Research Revenue). Among the parameters to grade, one finds the profile and previous work of the submitter (roughly speaking, the h-index and Co.), the project (relevance, feasibility) and the impact. For the latter, a crystal ball is necessary. And, as a scientist, comparing the impact of two projects amounts to presupposing that one field is more important than another, which is by essence subjective. In the end, this impact criterion is the most important and the least objective.


The second problem is step (ii), which directly results from step (i). In order to be efficient, the administrative staff (who deal with all the researchers' fears and frustrations; we never thank them enough for the big job) build a pre-ranking on the basis of the evaluators' objective assessments. They are supposed to be objective in the sense that they have transformed their appreciation into a number. There are 3 to 6 evaluators. Imagine that one of them thinks the project has a not-that-high impact: the project is directly not A+. (Statistics would ask the operator to discard the best and the worst evaluations and to compute the median rather than the average.) At step (iii), since the board has to deal with dozens and dozens of submissions (the scientific community is rich in ideas; we are paid for that), the A+ and the F- are easy to rank. Then comes the "rest": all very good or excellent PIs, all very good or excellent projects. What to do? How to make the evaluation robust against a subjective impression of a project? A math colleague proposed to draw lots: chance can make the decision, and one cannot be frustrated with chance.
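To make that parenthesis concrete, here is a minimal sketch of the robust aggregation, in Python; the 0-10 scale and the sample marks are invented for illustration:

    import statistics

    def robust_score(scores):
        # Drop the single best and single worst marks, then take the
        # median of the rest: one outlier reviewer can no longer sink
        # (or save) a project on his/her own.
        if len(scores) < 3:
            return statistics.median(scores)  # nothing to trim
        trimmed = sorted(scores)[1:-1]
        return statistics.median(trimmed)

    # Five reviewers, one of whom simply dislikes the topic.
    print(robust_score([9, 9, 8, 9, 2]))      # -> 9
    print(statistics.mean([9, 9, 8, 9, 2]))   # -> 7.4

With the plain average, one dissenting mark pulls an otherwise top-ranked project out of the A+ bracket; with the trimmed median, it stays where the consensus puts it.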


I am not a lucky guy, so I would propose another system: the subsidiary question. "Subsidiary" comes from the Latin subsidium, which means 'assistance, help'. In French, un subside is a support grant given by the State, hence question subsidiaire: the question that gives the grant. Any contest has a subsidiary question. Why not... in science? The final question of the proposal would be: "How many sticks are there in the assembly below?" The decision is then based on your own judgement and your skill. A little less frustrating. Just enough to be disappointed for 3 hours instead of 6.
[Photo: an assembly of sticks to count]
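For what it is worth, running the tie-break is trivial. A minimal sketch, assuming each shortlisted proposal comes with a guessed count (the proposal names, guesses and true count below are all made up):

    def break_tie(guesses, true_count):
        # Among equally ranked proposals, award the grant to the one
        # whose subsidiary-question guess is closest to the true
        # number of sticks in the photo.
        return min(guesses, key=lambda p: abs(guesses[p] - true_count))

    # Three equally excellent proposals; say the photo holds 137 sticks.
    print(break_tie({"A": 120, "B": 141, "C": 90}, true_count=137))  # -> B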