Dear all, 

I shall neither confirm nor deny the validity of these results, but squinting my eyes, and leaving aside the various dimensions we had to account for to make a hopefully well-balanced conference, then yeah, that looks like a very rough average of the several different thresholds we used. Once the dust settles and I'm not totally overwhelmed anymore (right!), Laura and I will sit down and write some sort of account of how things went down this year (follow us on Twitter, @visioncircuits and @tpvogels, for updates on that). Thanks, Laurent, for taking a first stab at this and keeping us on our toes. 

Did we get it right? Probably not in every way, and what we'll write will be more for next year than anything else. Do you have feedback? Please let us know, though I'd like to receive your criticism offline, if possible; it makes it much easier to digest. I hope this helps.

All the best, 

Tim



(sent from my phone)

On 11 Feb 2022, at 15:28, PERRINET Laurent <laurent.perrinet@univ-amu.fr> wrote:



Dear community,

As of today, I have received N = 82 answers from the Google form (79 of them valid) out of the 881 submitted abstracts. In short, the total score is simply the sum of the reviewers' scores weighted by their respective confidence levels (as stated in the email we received from the chairs), and the threshold is close to 6.05 this year:
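To make that aggregation rule concrete, here is a minimal sketch, assuming a weighted mean of per-reviewer scores with confidence levels as weights (this reading of "weighted by the confidence levels", and the example numbers, are my assumptions, not COSYNE's official code):

```python
def weighted_score(scores, confidences):
    """Confidence-weighted mean of reviewer scores for one abstract."""
    total_weight = sum(confidences)
    if total_weight == 0:
        raise ValueError("confidence weights sum to zero")
    return sum(s * c for s, c in zip(scores, confidences)) / total_weight

# Example: three reviews, scores on a 0-10 scale, confidences 1-3
print(weighted_score([7, 5, 8], [3, 1, 2]))  # → 7.0
```

Under this reading, an abstract would pass the cut if its weighted score exceeds the estimated threshold of about 6.05.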

[attached figure: 2022-02-11_COSYNE-razor]

More details in the notebook (or directly in this post) which can also be forked here and interactively modified on binder.

cheers,

Laurent

-- 
Laurent Perrinet - INT (UMR 7289) AMU/CNRS
https://laurentperrinet.github.io/




On 4 Feb 2022, at 09:19, PERRINET Laurent <laurent.perrinet@univ-amu.fr> wrote:

Dear community

COSYNE is a great conference that plays a pivotal role in our field. The raw numbers we were given are:

* 881 submitted abstracts
* 215 independent reviewers
* 2639 reviews

If you have submitted an abstract (or several) you have recently received your scores. 

I am not affiliated with COSYNE, yet I would like to contribute in some way, and I would like to ask for one minute of your time to report the raw scores from your reviewers: 


(Do one form per abstract.)

For this crowd-sourcing effort to have the most positive impact, I will share the results and summarize them in a few lines in one week's time (11/02). The more feedback you send, the higher the precision of the results!
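The precision claim can be made concrete with a back-of-the-envelope sketch: treating each response as an independent estimate, the standard error of the estimated threshold shrinks roughly as 1/√N (the spread value below is purely illustrative):

```python
import math

def standard_error(sample_sd, n):
    """Standard error of the mean over n independent responses."""
    return sample_sd / math.sqrt(n)

# With an illustrative score spread (SD) of 1.0:
print(standard_error(1.0, 25))   # → 0.2
print(standard_error(1.0, 100))  # → 0.1
```

So quadrupling the number of responses halves the uncertainty on the estimate.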

Thanks in advance for your help,
Laurent


PS: if any similar initiative already exists, I'd be more than happy to hear about it.


-- 
Laurent Perrinet - INT (UMR 7289) AMU/CNRS
https://laurentperrinet.github.io/