ICER 2021
Mon 16 - Thu 19 August 2021. ICER 2021 will take place online.

## Call for Participation

The 17th annual ACM International Computing Education Research (ICER) Conference aims to gather high-quality contributions to the computing education research discipline. We invite submissions about any aspect of teaching and learning computing, using any research methods, including but not limited to empirical studies, novel innovations, new theories, and critical arguments about computing education phenomena.

Research areas of particular interest include:

• discipline-based education research (DBER) about computing, computer science, and related disciplines,
• design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge or skills development,
• pedagogical environments fostering computational thinking,
• learning sciences work in the computing content domain,
• psychology of programming,
• learning analytics and educational data mining in computing education contexts,
• learnability of programming languages and tools,
• informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals,
• measurement instrument development and validation (e.g., concept inventories, attitude scales, etc.) for use in computing disciplines,
• teacher professional development at all levels,
• rigorous replication of empirical work to compare with or extend previous empirical research results,
• systematic literature reviews on computer science education.

## Important Dates

All submission deadlines are “anywhere on Earth” (AoE, UTC-12).

| What | When |
| --- | --- |
| Titles, abstracts, and authors due (the chairs will use this information to assign papers to PC members) | Friday, March 19th, 2021, AoE |
| Full paper submission deadline | Friday, March 26th, 2021, AoE |
| Decisions announced | Friday, May 21st, 2021 |
| Minor revisions due | Friday, May 28th, 2021 |
| Minor revisions approval notification | Friday, June 4th, 2021 |
| Final versions due to TAPS | Wednesday, June 16th, 2021, AoE |
| Published in the ACM Digital Library | The official publication date is the date the proceedings are made available in the ACM Digital Library. This date will be up to two weeks prior to the first day of the conference. The official publication date may affect the deadline for any patent filings related to published work. |

For more details on the review process and timeline, see the reviewer guide.

## Submissions

Submit at the ICER 2021 HotCRP site. When you submit the abstract or the full version ready for review, be sure to check the “ready for review” checkbox at the bottom of the submission form; otherwise the submission will be marked as a draft.

## Authoring Guidelines

The ICER conference maintains an evolving author guide, full of recommendations about scope, statistics, qualitative methods, theory, and other concerns that may arise when drafting your submission. These guidelines are a ground truth for reviewers; study them closely as you plan your research and prepare your draft.

## Submission Format and Publication Workflow

Papers submitted to the research track of ICER 2021 must be prepared according to the ACM TAPS workflow. Read this page carefully to understand the new workflow.

The most notable change from prior ICER conferences is that the submission format is now different from the publication format, to help separate content from presentation.

• The submission template is either the single-column Word Submission Template or the single-column LaTeX template (using the “manuscript,review,anonymous” options of the acmart class; see the sample-manuscript.tex example in the LaTeX master template samples). Reviewers will review in this single-column format. You can download these templates from the ACM Master Article Templates page.
• The publication template is either the single-column Word Submission Template or the LaTeX template using the “sigconf” style in acmart. You can download the templates from the ACM TAPS workflow page, where you can also see example papers using the TAPS-compatible Word and LaTeX templates. If your paper is accepted, you will use the TAPS system to generate your final publication outputs. This involves more than just submitting a PDF: you will submit your Word or LaTeX source files and fix any errors in your source before the final version deadline listed above. The final published versions will be in the ACM two-column conference PDF format (as well as XML, HTML, and ePub formats in the future).

For LaTeX users, be aware that there is a list of approved LaTeX packages for use with ACM TAPS. Not all packages are allowed.

This separation of submission and publication format results in several benefits:

• Improved quality of paper metadata, improving ACM Digital Library search.
• Multiple paper output formats, including PDFs, responsive HTML5, XML, and ePub.
• Improved accessibility of paper content for people with disabilities.
• Streamlined publication timelines.

One consequence of this new publication workflow is that it is no longer feasible to limit papers by page count, as the single-column submission format and the final two-column format result in hard-to-predict differences in length. The 2021 PC chairs and ICER Steering Committee considered several policies for managing length and decided to limit length by word count instead. Because there is no single established way to count words, here is how we will count for ICER 2021: authors may submit papers up to 11,000 words in length, excluding acknowledgements, references, and figures, but including all other text (tables included). The PC chairs will use the following procedures for counting words in the TAPS-approved formats:

• For papers written in the Microsoft Word template, Word’s built-in word-count mechanism will be used, selecting all text except acknowledgements and references.
• For papers written in the LaTeX template, the document will be converted to plain text using the “ExtractText” functionality of the Apache pdfbox suite (see here) and then post-processed with a standard command-line word-count tool (“wc -w”, to be precise). Line numbers added by the “review” class option for LaTeX will be removed prior to counting using “grep -v -E '^[0-9]+$'” (thanks to N. Brown for this).
• We acknowledge that many authors may want to use Overleaf to avoid dealing with command-line tools and, consequently, may be less enthusiastic about using another command-line tool to assess the word count. As configured by default, Overleaf does not count text in tables, captions, and math formulae, and is thus very likely to significantly underestimate the number obtained with the tool described above. To obtain a more realistic word count while writing the manuscript, authors need to take these additional steps:
• Add the following lines at the very beginning of your Overleaf LaTeX document:
```latex
%TC:macro \cite [option:text,text]
%TC:macro \citep [option:text,text]
%TC:macro \citet [option:text,text]
%TC:envir table 0 1
%TC:envir table* 0 1
%TC:envir tabular [ignore] word
%TC:envir displaymath 0 word
%TC:envir math 0 word
%TC:envir comment 0 0
```

• Make sure to write math formulae delimited by \begin{math} … \end{math} for in-line math and \begin{displaymath} … \end{displaymath} for equations. Do not use dollar-sign delimiters; these will result in Overleaf not counting math tokens (unlike Word and pdfbox), thus underestimating your word count.
• The above flags will ensure that in-text citations, tables, and math formulae will be counted but that comments will be ignored.
• The above flags do not cover more advanced LaTeX environments, so if authors use such environments, they should interpret the Overleaf word count with care (then again, if authors know how to work with such environments it is very reasonable to assume that they also know how to work with command-line tools such as pdfbox).
• Authors relying on Overleaf word count should be advised that the submission chairs will not have access to the source files and cannot re-run or verify any counting mechanism done by the submitting authors. To provide a fair treatment across all submission types, only the approved tools mentioned above will be used for word count. That said, submission chairs will operate under a bona fide assumption when it comes to extreme borderline cases.
• Papers in either format may not use figures to render text in ways that work around the word count limit; papers abusing figures in this way will be desk-rejected.
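The LaTeX counting procedure above can be sketched as a small shell pipeline. This is an unofficial illustration, not the chairs' exact script: the file names and the pdfbox jar name are assumptions, and the ExtractText step is shown as a comment because it requires Java and a real PDF.

```shell
# Unofficial sketch of the ICER 2021 LaTeX word-count procedure.
# Step 1 -- extract plain text from the PDF (requires Java and the
# Apache pdfbox app jar; jar and file names here are assumptions):
#   java -jar pdfbox-app-2.0.24.jar ExtractText paper.pdf paper.txt
# A tiny stand-in extract so the later steps can run on their own:
printf 'Introduction text here\n42\nmore body text\n43\n' > paper.txt

# Step 2 -- strip the line numbers added by the "review" class option:
grep -v -E '^[0-9]+$' paper.txt > paper-nolines.txt

# Step 3 -- count the remaining words:
wc -w < paper-nolines.txt   # prints 6 for the stand-in extract above
```

Note that `grep -v -E '^[0-9]+$'` drops only lines consisting entirely of digits, so the margin line numbers vanish while body text is untouched.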

A paper under the word count limit with either of the above approved tools is acceptable. The submissions chairs will evaluate each submission using the procedures above, notify the PC chairs of papers exceeding the limit, and desk-reject any papers that do.

We expect papers to vary in word count. Abstracts may vary in length; fewer than 300 words is a good guideline for conciseness. A submission’s length should be commensurate with its contributions; we expect most papers to be under 9,000 words by the rules above, though some may use up to the limit to convey details the authors deem necessary to evaluate the work. Papers may be judged too long if they are repetitive or verbose, violate formatting rules, or use figures to save on word count. Papers may be judged too short if they omit critical details or ignore relevant prior work. See the reviewer training for more on how reviewers will be instructed to assess conciseness.

All of the procedures above, and the TAPS workflow, will likely undergo continued iteration in partnership with ACM, the ICER Steering Committee, and the SIGCSE board. Notify the chairs of questions, edge cases, and other concerns to help make this pilot of the new workflow a success.

## Anonymization

ICER research paper submissions will be reviewed using a double-anonymous process: the authors do not know the identity of the reviewers, and the reviewers do not know the identity of the authors. To ensure this:

• Avoid titles that indicate a clearly identifiable research project.
• Remove author names and affiliations. (If you are using LaTeX, you can start your document declaration with \documentclass[manuscript,review,anonymous]{acmart} to easily anonymize these.)
• Avoid referring to yourself when citing your own work.
• Avoid references to your affiliation. For example, you might write “A Large Metropolitan University (ALMU)” rather than “Auckland University of Technology (AUT)”.
• Remove any identifying information in acknowledgements.
• Omit author details from the PDF you generate, such as author name or the name of the source document. These are often automatically inserted into exported PDFs, so be sure to check your PDF before submission.

Do not simply cover identifying details with a black box, as the text can easily be seen from under the box by dragging the cursor over it, and will still be read by screen readers.

Work that is not sufficiently anonymized will be desk-rejected by the PC chairs.

## Conditional Acceptance and Minor Revisions

All papers recommended for acceptance after the Senior PC meeting are either accepted or conditionally accepted. Accepted papers require no resubmission and will go directly to publication. For conditionally accepted papers, meta-reviews will indicate one or more minor revisions that are necessary for final acceptance; authors are responsible for submitting these minor revisions to HotCRP prior to the “Minor revisions due” deadline in the table above. The Senior PC and Program Chairs will review the final revisions; if they are acceptable, the paper will be officially accepted, and authors will have one week to submit an approved version to TAPS for publication. If the Senior PC judges that the requested minor revisions were not suitably addressed, the paper will be rejected.

Because the turnaround time for minor revisions is only one week, minor revisions will necessarily be very minor: they may include spelling and grammar issues, added clarity, or details helpful for future readers of the archived paper. New results, new methodological details that change the interpretation of the results, and other substantially new content will not be allowed.

## Conflicts of Interest

The SIGCSE Conflict of Interest policy applies to all submissions. You can review how conflicts will be managed by consulting our reviewer training, which details our review process.

## Kudos

After a paper has been accepted and uploaded into the ACM Digital Library, authors will receive an invitation from Kudos to create an account and add plain-language text into Kudos on its platform. The Kudos “Shareable PDF” integration with ACM will then allow an author to generate a PDF to upload to websites, such as author homepages, institutional repositories, and preprint services, such as ArXiv. This PDF contains the author’s plain-text summary of the paper as well as a link to the full-text version of an article in the ACM Digital Library, adding to the DL download and citation counts there, as well as adding views from other platforms to the author’s Kudos dashboard.

Using Kudos is entirely optional. Authors may also use the other ACM copyright options to share their work (retaining copyright, paying for open access, etc.).

The program committee accepted 30 papers out of 158 submitted. In no particular order, they are:

• How should we ‘Explain in plain English’? Voices from the Community
• Max Fowler, University of Illinois
• Binglin Chen, University of Illinois
• Craig Zilles, University of Illinois
• Developing Behavioral Concepts of Higher-Order Functions
• Shriram Krishnamurthi, Brown University
• Kathi Fisler, Brown University
• All the Pieces Matter: The Relationship of Momentary Self-efficacy and Affective Experiences with CS1 Achievement and Interest in Computing
• Alex Lishinski, University of Tennessee
• Joshua Rosenberg, University of Tennessee
• Evaluating Proof Blocks Problems as Exam Questions
• Seth Poulsen, University of Illinois at Urbana-Champaign
• Mahesh Viswanathan, University of Illinois at Urbana-Champaign
• Geoffrey Herman, University of Illinois at Urbana-Champaign
• Matthew West, University of Illinois at Urbana-Champaign
• Changing the Online Climate via the Online Students: Effects of Three Curricular Interventions on Online CS Students’ Inclusivity
• Lara Letaw, Oregon State University
• Rosalinda Garcia, Oregon State University
• Heather Garcia, Oregon State University
• Christopher Perdriau, Oregon State University
• Margaret Burnett, Oregon State University
• Early Post-Secondary Student Performance of Adversarial Thinking
• Nicholas Young, Brown University
• Shriram Krishnamurthi, Brown University
• Uses, Revisions, and the Future of Validated Assessments in Computing Education: A Case Study of the FCS1 and SCS1
• Miranda Parker, University of California, Irvine
• Mark Guzdial, University of Michigan
• Allison Elliott Tew, AET Consulting
• A Semblance of Similarity: Student Categorisation of Simple Algorithmic Problem Statements
• James Finnie-Ansley, University of Auckland
• Paul Denny, University of Auckland
• Andrew Luxton-Reilly, The University of Auckland
• Investigating the Role of Cognitive Abilities in Computational Thinking for Young Learners
• Jean Salac, University of Chicago
• Cathy Thomas, Texas State University
• Chloe Butler, Texas State University
• Diana Franklin, University of Chicago
• Algorithm Visualization and the Elusive Modality Effect
• Albina Zavgorodniaia, Aalto University
• Artturi Tilanterä, Aalto University
• Ari Korhonen, Aalto University
• Otto Seppälä, Aalto University
• Arto Hellas, Aalto University
• Juha Sorva, Aalto University
• The Relationship Between Sense of Belonging and Student Outcomes in CS1 and Beyond
• Sophia Krause-Levy, University of California San Diego
• William G. Griswold, University of California San Diego
• Leo Porter, University of California San Diego
• Christine Alvarado, University of California San Diego
• NaturalLanguageProcessing4All
• Arthur Hjorth, Aarhus University
• When Wrong is Right: The Instructional Power of Multiple Conceptions
• Lauren Margulieux, Georgia State University
• Paul Denny, The University of Auckland
• Kathryn Cunningham, Northwestern University
• Michael Deutsch, McGill University
• Benjamin Shapiro, Georgia State University
• Teachers’ perspectives on talk in the programming classroom: language as a mediator
• Sue Sentance, Raspberry Pi Foundation
• Jane Waite, Queen Mary University of London
• Teachers’ Views and Experiences on Teaching Second and Subsequent Programming Languages
• Ethel Tshukudu, University of Glasgow
• Quintin Cutts, University of Glasgow
• Alaaeddin Swidan, Open University of the Netherlands
• Olivier Goletti, UCLouvain, Belgium, Leiden University, The Netherlands
• Felienne Hermans, Leiden University
• Children’s implicit and explicit stereotypes on the gender, social skills, and interests of a computer scientist
• Shirley de Wit, Leiden University
• Felienne Hermans, Leiden University
• Fenia Aivaloglou, Leiden University
• Understanding “Dark” Design Roles in Computing Education
• Colin M. Gray, Purdue University
• Shruthi Sai Chivukula, Purdue University
• Kassandra Melkey, Purdue University
• Rhea Manocha, Purdue University
• Understanding Sources of Student Struggle in Early Computer Science Courses
• Adrian Salguero, University of California, San Diego
• William G. Griswold, University of California, San Diego
• Christine Alvarado, University of California, San Diego
• Leo Porter, University of California, San Diego
• Service Interruption: Managing Commitment to Community Partners During a Crisis
• Lecia Barker, University of Colorado Boulder
• Amy Voida, University of Colorado Boulder
• Vaughan Nagy, University of Colorado Boulder
• Identity in K-12 Computer Education Research: A Systematic Literature Review
• Gregor Große-Bölting, Kiel University
• Andreas Mühling, Kiel University
• Computing Educational Activities involving People rather than Things appeal more to Women (Recruitment Perspective)
• Ingrid Maria Christensen, IT University of Copenhagen, Denmark
• Melissa Høegh Marcher, IT University of Copenhagen, Denmark; University of Lodz
• Paweł Grabarczyk, IT University of Copenhagen, Denmark
• Therese Graversen, IT University of Copenhagen, Denmark
• Claus Brabrand, IT University of Copenhagen, Denmark
• Identifying SQL misconceptions of novices: findings from a think-aloud study
• Daphne Miedema, Eindhoven University of Technology
• Efthimia Aivaloglou, Open Universiteit
• George Fletcher, Eindhoven University of Technology
• Incentivized Spacing and Gender in Computer Science Education
• Iman YeckehZaare, University of Michigan - School of Information
• Elijah Fox, University of Michigan
• Gail Grot, San Jose State University
• Sean Chen, University of Illinois at Urbana-Champaign
• Claire Walkosak, Macalester College
• Kevin Kwon, Pepperdine University
• Annelise Hofmann, University of Michigan
• Jessica Steir, University of Michigan
• Olivia McGeough, University of Michigan
• Nealie Silverstein, University of Michigan
• Computing Educational Activities involving People rather than Things appeal more to Women (CS1 Appeal Perspective)
• Melissa Høegh Marcher, IT University of Copenhagen, Denmark
• Ingrid Maria Christensen, IT University of Copenhagen, Denmark
• Paweł Grabarczyk, IT University of Copenhagen, Denmark
• Therese Graversen, IT University of Copenhagen, Denmark
• Claus Brabrand, IT University of Copenhagen, Denmark
• Exploring and influencing teacher grading for block-based programs through rubrics and the GradeSnap tool
• Alexandra Milliken, North Carolina State University
• Veronica Cateté, North Carolina State University
• Ally Limke, North Carolina State University
• Isabella Gransbury, North Carolina State University
• Hannah Chipman, North Carolina State University
• Yihuan Dong, North Carolina State University
• Tiffany Barnes, North Carolina State University
• Re-examining Inequalities in Computer Science Participation from a Bourdieusian Sociological Perspective
• Maria Kallia, University of Glasgow
• Quintin Cutts, University of Glasgow
• You Really Need Help: Exploring Expert Reasons for Intervention During Block-based Programming Assignments
• Yihuan Dong, North Carolina State University
• Preya Shabrina, North Carolina State University
• Samiha Marwan, North Carolina State University
• Tiffany Barnes, North Carolina State University
• Exploring ‘reverse-tracing’ questions as a means of assessing the tracing skill on computer-based CS 1 exams
• Mohammed Hassan, University of Illinois
• Craig Zilles, University of Illinois
• Understanding Professional Identity of Computer Science Teachers: Design of the Computer Science Teacher Identity Survey
• Lijun Ni, University at Albany
• Tom McKlin, The Findings Group
• Han Hao, Claremont Graduate University