The 54th IEEE/ACM International Symposium on Microarchitecture (MICRO) introduces artifact evaluation (AE) for the first time. AE has become common practice in the systems community (OSDI, PLDI, PACT, MLSys), and ASPLOS has successfully run AE for the past two years (ASPLOS 2020 and ASPLOS 2021). We invite the authors of accepted MICRO 2021 papers to submit their artifacts for assessment under the ACM Artifact Review and Badging policy. Note that this submission is voluntary and has no influence on the final decision regarding the papers.

Upload Artifact Submissions to HotCRP

Important Dates

  • Paper Decision Notification: July 14, 2021
  • Intent to Submit: July 20, 2021
  • Artifact Submission: July 31, 2021
  • Artifact Decision: September 4, 2021

Process

The authors of accepted papers at MICRO 2021 will be invited to submit their artifacts according to the submission guidelines established by previous conferences. Submissions will then be reviewed according to the reviewing guidelines. Papers that successfully pass AE will receive a set of ACM badges of approval, printed on the papers themselves and available as metadata in the ACM Digital Library (it is now possible to search for papers with specific badges in the ACM DL). Authors of such papers will have the option to include an artifact appendix of at most two pages in their camera-ready paper. The optional artifact appendix pages are free of charge.


ACM Reproducibility Badges

  • Artifacts Available
  • Artifacts Evaluated — Functional
  • Results Reproduced

Artifact Submission

An artifact submission consists of two parts:

  1. The paper and a two-page appendix. Please prepare your appendix using the provided template. The appendix is expected to contain the following main sections:
    • an abstract
    • an itemized metainformation list
    • access to the artifact
    • system requirements and dependencies
    • experiment workflow
    • steps for evaluation
    • results
    Note that the paper does not need to be the final version, as the main goal of this submission is to let artifact reviewers reproduce your experiments.
  2. The artifact. Please make your artifact accessible to the reviewing committee. We do not restrict how the artifact is delivered. However, if you would like to apply for the "Artifacts Available" badge, you will need to place your artifact in a public archival repository (for more details, see the reviewing guide).
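For authors drafting the appendix, a rough skeleton of the sections listed above might look like the sketch below. This is only an illustration: use the official template provided by the AE chairs, and note that the section names and checklist entries here simply mirror the list above rather than any prescribed format.

```latex
% Hypothetical skeleton of a two-page artifact appendix.
% Not the official template -- section names mirror the
% submission guidelines listed above.
\appendix
\section{Artifact Appendix}

\subsection{Abstract}
% One paragraph summarizing what the artifact contains
% and which results of the paper it supports.

\subsection{Artifact Meta-Information Checklist}
\begin{itemize}
  \item \textbf{Program:} (placeholder)
  \item \textbf{Data set:} (placeholder)
  \item \textbf{Hardware:} (placeholder)
  \item \textbf{Metrics:} (placeholder)
\end{itemize}

\subsection{Access to the Artifact}
% URL or DOI, e.g., of a public archival repository
% if applying for the Artifacts Available badge.

\subsection{System Requirements and Dependencies}

\subsection{Experiment Workflow}

\subsection{Steps for Evaluation}

\subsection{Results}
```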

Please submit your artifact on our submission site. When you submit, please provide details about the artifact's software and hardware requirements; this helps the Artifact Evaluation Committee assign suitable reviewers.


Benefits

There are major benefits to introducing AE in our conferences.

  1. Dissemination of Ideas: The goal of our research is to disseminate insights and encourage people to build upon our ideas. Open-sourcing the artifacts and opening up the ideas to the whole community ensures that the community can work together towards solving important problems.
  2. Reproducibility of the Results: Artifact evaluation promotes reproducibility of experimental results and encourages code and data sharing to help the community quickly validate and compare alternative approaches.
  3. Safeguarding the Review Process: AE incentivizes people to conduct research in an ethical manner. The recent example of misconduct in our conference reviewing process has greatly hurt the reputation of this community. Introducing AE can help to restore our integrity and commitment to reproducible and ethical research.

Artifact Evaluation Organization

Artifact Evaluation Co-Chairs Affiliation
Samira Khan University of Virginia
Gennady Pekhimenko University of Toronto
Student Chair Affiliation
Sihang Liu University of Virginia

Selection Committee

Committee Member Affiliation
Utpal Bora Indian Institute of Technology Hyderabad
Hari Cherupalli Synopsys
Deeksha Dangwal University of California, Santa Barbara
Lasse Eggen NTNU
Elba Garza Texas A&M University
Benjamin Ghaemmaghami University of Texas at Austin
Nathan Gober Texas A&M University
Aman Goel University of Michigan
Sneha Goenka Stanford University
Mike He University of Washington
Marcos Horro Universidade da Coruña
Kashif Inayat Incheon National University
Rohan Juneja National University of Singapore
Asif Ali Khan Technische Universität Dresden
He Li University of Cambridge
Yi Li Princeton University
Tianyi Liu University of Texas at San Antonio
Sergey Madaminov Stony Brook University
Abdulrahman Mahmoud Harvard University
Mahmood Naderan-Tahan Ghent University
Marcelo Orenes-Vera Princeton University
Suchita Pati University of Wisconsin–Madison
Pierre-Yves Péneau INRIA
Ananth Krishna Prasad University of Utah
Gokul Subramanian Ravi University of Chicago
Gokulan Ravi Purdue University
Joseph Rogers Norwegian University of Science and Technology
Yongming Shen Stony Brook University
Arun Subramaniyan University of Michigan
Nishil Talati University of Michigan
Cheng Tan Pacific Northwest National Laboratory
Yuke Wang University of California, Santa Barbara
Shijia Wei University of Texas at Austin
Di Wu University of Wisconsin–Madison
Chenhao Xie Pacific Northwest National Laboratory

Frequently Asked Questions

Will my artifacts get rejected if they do not run on the first try?

AE is an iterative process between authors and reviewers. It is a positive and constructive process that makes most artifacts much stronger. The authors can revise their submission and communicate with the reviewers through the submission website.

My artifacts run on special hardware. Can I still submit?

AE supports submissions with specialized hardware and simulators. The authors provide access to their specialized hardware through the submission website. Artifacts evaluated at ASPLOS 2021 included FPGA prototypes, ASICs, and specialized simulators.

Some parts of my artifacts have IP restrictions. Can I still submit?

AE supports artifacts with IP restrictions. In cases where some parts of the software/hardware stack cannot be shared, we let the authors provide direct access to their platform to our evaluators only, so that they can perform measurements directly on those platforms. We had several successful cases of this approach at ASPLOS 2021. In such cases, authors can still receive the Artifacts Evaluated — Functional and Results Reproduced badges, but not the Artifacts Available badge.