Thank you for writing and for entering the FTC Robocall Challenge. Generally, feedback and scoring details are not given for high-volume challenges, but I will be sure to pass your request along and update you if we are able to provide you with this information.
Thanks for writing and for entering the FTC Robocall Challenge. I will pass along your request and update you if we are able to provide you with this information.
I'm including the relevant section from the Official Rules on how submissions will be displayed below.
"8. Display of Submissions
A. Eligible Submissions will be posted on the Competition Website on a rolling basis after being screened and/or tested by the Administrator for basic functionality, accuracy of messaging, integrity (i.e., security), and appropriateness of content. The title, text description, images, and video (if submitted), will be displayed publicly on the Competition Website. The Proposal will only be viewable by authorized employees, officials, and agents of the Sponsor, Administrator, and judges, and shall not be disclosed except as permitted or required by Federal law."
Looking at the winners, I don't see coverage of the challenge's functional requirements. Was there an evaluation of these requirements against the winning solutions? Following are what I read as the requirements. As I see it, these solutions don't cover cell phones.
1. BFR 1.0 Blocking RoboCalls: The system will block calls.
a. BFR 1.1 Automated calls by political parties, charities, and health care providers, as well as reverse 911 calls, will be allowed.
b. BFR 1.2 Calls identified as “Unknown caller” or on the violation list will be blocked.
2. BFR 2.0 How Many Consumers Will be protected: The system will block calls for consumers who have phone carriers who subscribe to the service.
3. BFR 3.0 What Evidence do you have to support your idea: The functional system design provided has traceability to the requirements.
4. BFR 4.0 How easy might it be for robocallers to adapt and counter your scheme: The system will block calls.
a. BFR 4.1 Automated calls where the violation number has been identified will be blocked.
b. BFR 4.2 The system can adapt to changing numbers through a user community whose reporting capacity far exceeds the RoboCallers'.
c. BFR 4.3 Any Security System can be broken: Based on a typical IT threat analysis, the security of this system may be compromised, yet the effort and risk involved do not justify the reward.
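The blocking behavior described in BFR 1.x above could be sketched roughly as follows. This is a hypothetical illustration only; the category names, number formats, and function names are my own inventions and are not drawn from any actual submission.

```python
from typing import Optional

# Categories exempted under BFR 1.1 (political, charity, health care, reverse 911).
EXEMPT_CATEGORIES = {"political", "charity", "healthcare", "reverse911"}

# Community-reported numbers on the violation list (BFR 1.2); placeholder values.
VIOLATION_LIST = {"+15555550100", "+15555550101"}

def screen_call(caller_id: Optional[str], category: Optional[str] = None) -> str:
    """Return 'allow' or 'block' for an incoming call."""
    if category in EXEMPT_CATEGORIES:   # BFR 1.1: exempt automated calls
        return "allow"
    if caller_id is None:               # BFR 1.2: "Unknown caller"
        return "block"
    if caller_id in VIOLATION_LIST:     # BFR 1.2: violation-list match
        return "block"
    return "allow"
```

Per BFR 4.2, the violation list would be kept current by community reporting as robocallers rotate numbers.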
When I first saw this competition, I was elated. But as it progressed, I began to question its authenticity due to the complexity of the submission, testing, verification, proofs, etc. Now that the winning list is out, I'm equally shocked. I'm sorry to say it, but I think this whole challenge was a fluke from the beginning. It's like an individual without any knowledge of a process asking a question, then using the same basic knowledge to judge the answers. If that's the case, why even bother? Spending $50K, I would expect more from the challenge, and I'm sure most challengers here would too, but since there's no transparency in the judging process, I had a feeling something like this would happen. What a total waste of time and money. Unfortunately, I'll be sure to steer clear of future FTC challenges.
Tony P, I wholeheartedly agree with your sentiment. I spent a considerable amount of time and effort crafting my submission to address the judging criteria point by point. I am truly amazed, as well as being shocked and dismayed, at the egregiously poor quality of the submissions judged to be winners.
Another thing I found strange about this challenge is that summaries of the submissions were published on the FTC website *before* the closing date for submissions, allowing those so inclined to build on the efforts of others; hardly conducive to fair competition.
However, I don't intend to let my efforts be wasted. I have filed a provisional patent based on my submission, and plan to license that solution, which will hopefully yield a return far in excess of the $50K prize.
I'm writing from ChallengePost, the competition platform on which the FTC Robocall Challenge resides. We really appreciate the time everyone has taken to participate in the challenge and create submissions. We're sorry if anyone is upset that they were not chosen as a winner.
We can assure you that just because you did not win does not mean the competition is not legitimate. All Submissions were judged by an expert panel of impartial judges selected by the Sponsor.
In addition, the bulk of the submissions reside in attached documents that are not viewable by the public, so it is not possible for the public to see the complete submissions and fully and properly evaluate them.
That said, we sincerely appreciate the time and effort put into all the submissions. Our sincere thanks to everyone who participated in the challenge.
Upset because we weren't chosen as winners? I'm sorry, I don't mean to be rude, but I find it very unprofessional to brush aside our concerns as mere disappointment. If you look at the history of the postings, you can clearly see that I voiced my concerns months before the winners were even announced. Who stated their submission should have won? Honestly, I wonder whether the judges even followed the judging guidelines, given the chosen winners. With 50% of the points going to submissions with a proof of concept, I can't fathom how any of these winning submissions surpassed those with working concepts, of which I saw quite a few.
On April 2, I wrote to firstname.lastname@example.org asking for the scores and got via email the same reply that Marny posted above.
On April 9, I submitted a Freedom of Information Act request to the FTC, asking for information about the scoring of this Challenge. FOIA rules say that I'm supposed to get a response in 20 business days; I'll post the results when I get them.
I am seeking the scoring results from the recently completed FTC Robocall Challenge. The contest specified very explicit criteria for scoring, which was the basis on which the winners were decided.
For each entry, including my own, and that of the three winners, plus the other 794 entries, I would like to see the assigned score in each of the three scoring categories, any scores assigned to the published sub-criteria in each category, and the composite score based on the listed weightings.
If the scoring was done separately by each judge and then combined, I would like to see those separate scores. Similarly, if staff contributed to the scoring, provide those scores prior to aggregation.
If there were additional scoring guidelines or other "rules" published to the judges and/or staff involved in the scoring, provide that.
I know that the detailed submissions themselves are confidential and I am not asking to see those. I do not believe that disclosure of individual numerical scores would be a breach of this confidentiality.
Please reference the data in some form to the publicly-available submission summaries at robocall.challenge.gov.
I would like to see any additional subjective scoring data or comments you have regarding my own submission. It would be best if you make your entire response to this FOIA request publicly available, and you have my permission to disclose any information related to the Challenge which would otherwise be confidential to me (including my submission to the contest).
If providing data for all 798 entries is too burdensome, provide the data for the top-scoring 20 entries, plus my own.
Now that the challenge is over, I see no good reason why the FTC cannot make public both the detailed solutions and grading of those contestants who give the FTC permission to do so; the FTC could provide an online form for that purpose. Those who wish to commercialize their solutions could protect their intellectual property by filing a patent (provisional or nonprovisional) before permitting the disclosure of their solutions.
I spoke to the FTC staff attorney responsible for the Challenge to get some additional explanation of the judging process and what these files contain. If you download and unpack the .zip, you will find five spreadsheet files and a .pdf.
As I now understand it, the submissions went through a couple of screening steps that are covered by the "combined challenge submissions" spreadsheet and the "TextDes_Inteligble_Submissions" spreadsheet; this yielded between 250 and 300 submissions for further judging.
The judges then spent time reviewing and discussing those submissions, and came up with 7 "finalists". The three judges then assigned their individual scores to those 7 finalists. These scores were combined and the winners were determined.
You can see the scores assigned by the judges for those 7 submissions in the remaining spreadsheets.
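The aggregation described above (individual judges' category scores combined into a composite using the published weightings) could work along these lines. This is a speculative sketch: the category names and weights are placeholders based on the thread's mention of a 50% proof-of-concept weighting, not the FTC's actual published values.

```python
# Placeholder category weights; the thread mentions 50% for "does it work",
# so the remainder is split arbitrarily here for illustration.
WEIGHTS = {"works": 0.50, "ease_of_use": 0.25, "rollout": 0.25}

def composite_score(judge_scores):
    """Combine per-judge category scores (0-100) into one weighted composite.

    judge_scores: list of dicts, one per judge, mapping category -> score.
    """
    # Average each category across the judges, then apply the weights.
    avg = {c: sum(js[c] for js in judge_scores) / len(judge_scores)
           for c in WEIGHTS}
    return sum(WEIGHTS[c] * avg[c] for c in WEIGHTS)
```

For example, two judges scoring a finalist (80, 60, 70) and (90, 70, 60) would yield a composite of 75.0 under these assumed weights.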
So (again, as I understand it) numeric scoring data does not exist for most of the submissions (including mine, for example, since it was not a "finalist").
I have to admit that the FOIA response hasn't cleared up my misgivings about the judging process, but I hope to learn more.
Separately, the FTC gave Challenge participants the opportunity to upload the full description of their submission for public viewing, if they so desired. I could not figure out how to access these uploads, but it was explained to me that they are here: http://ftc.gov/os/comments/robocallchallenge/index.shtm.
David's post raises the obvious question: Why are the scores shown only for the 7 finalists? It appears that the FTC has been less than forthcoming in response to his FOIA request. Is there any reason why the FTC cannot make the scores of all of the contestants available?
What is the meaning of the columns in the spreadsheets labeled "Not Responsive," in which all entries are likewise "Not Responsive"? Not responsive to what?