Just in case you hadn't seen it, the eBird homepage is now featuring news of a contest we are running next week (24-31 Jan), which will award 20 copies of BirdsEye (the cool new app that draws on eBird data) as prizes. All of us at Team eBird are big fans of this new application, which puts much of the best bird-finding info from eBird in the hands of the BirdsEye user. With a commitment to data entry coming down the pike too, we are hopeful that BirdsEye will become an even more useful tool for eBirders in the future.
The full story on the eBird homepage gives details of the contest (basically, the 20 eBirders submitting the most checklists over that time frame will win free copies of the app). The story also provides a summary of the app and what it does, as well as our Team eBird review.
If you have an iPhone though, you might as well register! The app sells for $20, and we don't yet know what kind of totals would be required to win!
If you don't yet have BirdsEye, we highly recommend it. If you already own BirdsEye, we'd love to hear your thoughts on it. Better yet, comment on it in the App Store.
The rules don't seem very clear; it sounds like the way to hedge your bets is to submit 60 one-minute observations every hour, each as an empty list. Even if you only did that for half an hour each day, that's 270 checklists; while I doubt anyone will do that, it does seem like a way to cause issues with the data.
I actually didn't think it was possible to submit an empty checklist, but it is. I would expect there to be a check at submission to stop that, but then again it's still useful to record "I tried to find birds but could not find a single one during the 10 minutes I was walking around looking"; that's technically a worthwhile data point, I would imagine. Now the question is: is that by design or by accident?
I guess I'd like clarification as to whether breaking a long observation period into smaller chunks is beneficial or harmful. Depending on what I see, is it better to do one 90-minute observation or six 15-minute observations? That's a bit of guidance that would be useful, and not just in the case of this contest.
Heck, if it's an issue that I even mentioned that 60 1-minute observations concept, just delete my comment.
I would think that the capacity to record empty checklists is by design, as they are sometimes highly significant or interesting.
I have submitted a couple of empty checklists. One was in the Everglades, during the middle of the day. I saw no birds, and I heard a single call note, which I could not ID.
I just submitted one today... a relatively brief checklist, in Massachusetts, during the middle of the afternoon.
Rest assured that Team eBird will be inspecting the data coming in for this competition. We didn't think it was necessary to state that any attempts to "cheat" would have a bearing on your reputation as an eBird contributor, as well as on whether you win the app.
In general, we do find multiple shorter counts at multiple locations to be more valuable than single long counts. The worst-case scenario for the latter is a full-day count covering a large distance. With these we lose the information we might have collected on: a) geographic and habitat patterns (if the count went from the coast to the mountains, for example); b) time-of-day patterns; c) multiple repeated counts.
Large numbers of short counts will affect certain bar charts by lowering the average frequency of all species. This is not bad, but it does change the data output, especially if a site with lots of short point counts is compared to one with lots of long ones. The former may record a common bird like Song Sparrow at 60% frequency (quite high), while the latter may get it at 99%. This isn't really a problem, but it does change the way some eBird output looks to the user.
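For anyone curious what that frequency number actually means, here is a minimal sketch (plain Python with made-up checklists, not eBird's real code or data model): frequency for a species is just the share of checklists that report it, which is why many short lists pull the percentage down even for common birds.

```python
def frequency(checklists, species):
    """Fraction of checklists on which `species` was recorded."""
    if not checklists:
        return 0.0
    hits = sum(1 for birds in checklists if species in birds)
    return hits / len(checklists)

# Hypothetical data: six 15-minute point counts vs. one 90-minute walk at the same site.
short_counts = [
    {"Song Sparrow"},
    {"Song Sparrow", "Northern Cardinal"},
    set(),  # an empty (zero-bird) checklist still counts toward the denominator
    {"Northern Cardinal"},
    {"Song Sparrow"},
    {"Black-capped Chickadee"},
]
long_count = [{"Song Sparrow", "Northern Cardinal", "Black-capped Chickadee"}]

print(frequency(short_counts, "Song Sparrow"))  # 0.5 -> ~50% on the bar chart
print(frequency(long_count, "Song Sparrow"))    # 1.0 -> 100% on the bar chart
```

Same birds, same place, but the site full of short counts shows a much lower frequency bar, exactly the effect described above.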
On the other hand, to continue with eBird bar charts, it takes 10 or more checklists per week (!) to make a good bar chart. Many point counts per site accomplish this quickly; for full-day checklists it takes a LONG time, and subtle variation (e.g., whether Song or White-throated Sparrow is more common) will be missed if both species are recorded on 100% of lists.
Of course, it all depends on what analysis is being done, but when effort is recorded, analysts working with the data scientifically can account for that effort as a predictor in their models.
So, for this reason, we do welcome those willing to do LOTS of point counts, and we think that extra effort is worth the prize. We'll know if we get cheaters, though.
And finally, on the topic of "zero bird" lists, these are totally appropriate for eBird. We have an informal competition to see who has submitted the most. I have 10 or more, including: 30-minute lists from pelagic trips where we saw no birds; owling efforts with no birds; point counts using the "random" protocol that happened to land in really, really bad birding areas; and worst of all, a full day where I spent all day birding and found nothing. Literally, I tried as hard as I could all day. I was within the United States and the date was in early August. I'll let people guess where I was...
Thanks for the response! I had tried to find explanations like this before but never had much luck; the contest just brought it back to mind, and I figured I might as well ask. Much like the question I had about what to do with observations from FeederWatch (to which I got the same answer from both parties). Knowing that there's no wrong way to use eBird is good, especially when it's sometimes hard to find people who use it (in the real world, at least).
Hello,
Any idea when the contest winners will be determined / announced?
Thanks!