PACE’s Question Set Certification Program (QSCP) will return for the 2013-2014 season. This program’s aim is to improve the overall quality of regular-difficulty and novice-difficulty house-written high school quizbowl sets by recognizing the top such sets written during the course of the year.
Here are the main changes since last year:
- We will now be awarding a “satisfactory” rating to sets that meet some but not all of the criteria for excellence.
- Certification criteria have been more clearly defined.
- We will no longer provide feedback on partial sets.
- Institutional changes have been made so that we can consistently provide feedback within 30 days of receiving the set.
Last year, PACE recognized the following exemplary sets. We encourage all question writers to take a look at these sets when producing their own question sets.
- GSAC XX, written and edited by Saumil Bandyopadhyay, Soho Kim, Jordan Bekenstein, Raleigh Matteo, and other members of the Maggie L. Walker Governor’s School team.
- Harvard Fall Tournament VII, head-edited by Stephen Liu and written by members of the Harvard team.
- Ladue Invitational Spring Tournament III, written and edited by Max Schindler, Ben Zhang, Haohang Xu, and other members of the Ladue Horton Watkins High School team.
- Masonic Sectionals 2012, written by David Reinstein with contributions from Donald Taylor.
- JAMES, written and edited by Thomas Jefferson High School for Science and Technology and the Liberal Arts and Science Academy of Austin.
To submit a set for certification, send the entire set to PACE President Mike Bentley at email@example.com. Within 30 days, PACE will rate the set as “exemplary,” as “satisfactory,” or as meeting neither of those standards. We will only be judging finished sets.
PACE will recognize sets it has judged to be “exemplary” on its website, on the hsquizbowl forums, and at the awards ceremony of the NSC. “Satisfactory” sets will be listed on its website.
PACE will use the following criteria to judge sets. Based on feedback from last year, we’ve added some specific criteria (starred) to help tournament editors know exactly what we’re looking for.
1. Difficulty control – Is this set at the announced difficulty or easier? Does it avoid excessively hard material? Regardless of its stated difficulty, can a wide range of teams score points on it (with most averaging several tossups per game and 10 or more points per bonus)?
* These questions are sufficiently answerable for low-level local teams.
* These questions are sufficient to distinguish mid-level local teams.
* These questions are sufficient to distinguish many of the best eligible teams on a game-by-game basis without sacrificing the playing experience of other teams.
2. Length control – Do these questions have enough clues to properly distinguish teams without being longer than the announced length or violating reasonable expectations?
* Tossup questions have multiple clues arranged in descending order of difficulty.
* Individual clues are expressed succinctly without excess verbiage.
* Few to no tossups exceed 650 characters or six lines of 10-point Times New Roman.
* Bonuses and team-directed questions are not excessively long (few to no bonus parts or team-directed question parts exceed 200 characters or two lines of 10-point Times New Roman).
3. Pyramidality and clue quality – Do these tossups use significant, uniquely-identifying, factually correct clues that high schoolers can know, in the order that high schoolers are likely to know them? Do these bonuses have an easy, middle, and harder part, all of which are answerable and significant? Do this set’s questions show that more went into them than cursory searches of previous packets and Wikipedia?
* Clues uniquely refer to the desired answer on a consistent basis.
* Clues have few to no factual errors or misleading ambiguities.
* Some clues across the set are new, creative, and/or capable of presenting important, knowable material in a new way without increasing the difficulty level of the tournament.
* Clues are grounded in the realm of information that intellectually-curious high school students are likely to discover in venues other than college quizbowl tournaments.
* There is no plagiarism from previous question packets or outside sources, and questions render basic definitions and information in newly-written words whenever possible.
4. Comprehensibility and legibility – Does this set use complete, grammatical English sentences? Do these questions make sense to a normal English speaker when read aloud?
* The set is written in grammatical, proofread English with minimal typographical errors.
* The text of questions avoids confusing turns of phrase or wordings that are common in quizbowl questions but not common outside quizbowl questions (“quizbowlese”).
Frequently Asked Questions:
Q: What are the differences between exemplary and satisfactory sets?
A: Exemplary sets meet most or all of the above criteria. Satisfactory sets still meet some of the above criteria, but may have issues that prevent PACE from giving them an exemplary rating.
Q: Will PACE be judging specific NAQT or HSAPQ sets?
A: No. PACE has confidence that NAQT and HSAPQ sets meet the above criteria and are thus “exemplary”.
Q: Do sets have to be judged “exemplary” or “satisfactory” in order to meet the bar for “good” as specified in the qualification guidelines for NSC affiliated tournaments?
A: No. It is important for tournament directors to know ahead of time how many teams will qualify for the NSC, and the above tournament certification process will generally take place after a tournament has happened. Thus, PACE will pass a separate, less rigorous judgment on house-written sets for the purposes of determining if the set meets the bar for a “good” affiliated tournament. Any tournament that has been judged to be exemplary or satisfactory through the QSCP process will automatically meet the bar for a “good” tournament for certification purposes.
Q: Will PACE be judging collegiate or middle school sets?
A: No. This program extends only to sets written for high school teams (even if high school teams end up playing on those sets).
Q: Will PACE be judging novice high school sets?
A: Yes. These sets will be judged similarly to regular difficulty sets, only with extra attention given to elements like difficulty control and length control.
Q: Can a set targeted for upper-echelon high school teams earn an “exemplary” rating?
A: PACE will only give the “exemplary” rating to sets which offer meaningful games to the full range of high school teams, or do the same for the full range of eligible teams in specifically-designated novice fields. Thus, a set which ends up at “nationals prep” difficulty without being sufficiently playable by lower-level teams will not be certified as “exemplary”.
Q: Will PACE re-examine a set that has changed after it has been mirrored?
A: No, the judgment of whether a set is deemed exemplary or not will be based on the initial version of the set sent to the certification committee. The reason for this is to encourage hosts to get the set right the first time.
Q: Can I still get feedback on my set, even if it’s not completely done yet?
A: No, PACE will only be providing feedback on finalized sets.
Q: Will PACE provide feedback for questions I’m writing on my own for non-high school tournaments (e.g., an ACF Fall packet)?
A: No, this program is just for people writing announced high school tournaments. If you want general question writing feedback, please consider the ACF Writing Feedback program.
Q: What is PACE’s motivation for this set certification program?
A: PACE hopes that this program will increase the overall quality of questions used at high school quizbowl tournaments by providing constructive feedback to question writers, and by specially recognizing the best question sets.