1st Championship Value Prediction (CVP-1)

in conjunction with:
ISCA-45: http://iscaconf.org/isca2018/index.html
Sponsored by Qualcomm

The JILP Workshop on Computer Architecture Competitions (JWAC) is a forum for holding competitions that evaluate computer architecture research topics. The sixth JWAC workshop is organized around a competition for value prediction algorithms: the Championship Value Prediction (CVP) invites contestants to submit their value prediction code to participate in this competition. Contestants will be given a fixed storage budget to implement their best predictors on a common evaluation framework provided by the organizing committee.

Objective

The goal of this competition is to compare different value prediction algorithms in a common framework. Predictors will be evaluated on all instructions that produce a general-purpose register. Predictors must be implemented within a fixed storage budget, as specified in the competition rules. A simple and transparent evaluation process enables dissemination of results and techniques to the larger computer architecture community and allows independent verification of results.
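For concreteness, below is a minimal sketch of the simplest classical scheme, a last-value predictor, in C++. The class and hook names here are assumptions for illustration only; the actual interface is defined by the CVP-1 simulation kit linked below.

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative last-value predictor: it predicts that an instruction will
// produce the same value it produced the last time it executed. The hook
// names below are hypothetical, not the CVP-1 kit's actual interface.
class LastValuePredictor {
    // Indexed by instruction PC. A budget-constrained entry would use a
    // fixed-size, tagged table instead of an unbounded map.
    std::unordered_map<std::uint64_t, std::uint64_t> table_;

public:
    // Called before execution: returns true and fills predictedValue if
    // the predictor chooses to predict this instruction.
    bool getPrediction(std::uint64_t pc, std::uint64_t &predictedValue) const {
        auto it = table_.find(pc);
        if (it == table_.end())
            return false;             // no history for this PC: don't predict
        predictedValue = it->second;  // predict the last observed value
        return true;
    }

    // Called at retirement with the architecturally correct value.
    // Training only on retired results keeps the predictor causal.
    void updatePredictor(std::uint64_t pc, std::uint64_t actualValue) {
        table_[pc] = actualValue;
    }
};
```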

 

Prizes

The championship will have three tracks, each targeting a different storage budget for the value predictor: 8KB, 32KB, and unlimited size. In each track, an additional side buffer of unbounded size is allowed for tracking auxiliary information used by the predictor (e.g., global history). The top performer in each track will receive a trophy commemorating his/her triumph (or some other prize to be determined later). Top submissions will be invited to present at the workshop, where results will be announced. All source code, write-ups, and performance results will be made publicly available through the CVP-1 website.
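To make the budgets concrete, here is a hypothetical back-of-the-envelope storage accounting for a table-based predictor in the 8KB track. The entry layout is an assumption for illustration, not something the rules mandate:

```cpp
#include <cstddef>

// Hypothetical entry layout: 64-bit value + 8-bit partial tag + 2-bit
// confidence counter. None of these field widths are mandated by the rules.
constexpr std::size_t kBudgetBits   = 8 * 1024 * 8;   // 8KB track, in bits
constexpr std::size_t kBitsPerEntry = 64 + 8 + 2;     // 74 bits per entry
constexpr std::size_t kMaxEntries   = kBudgetBits / kBitsPerEntry;  // ~885 entries
```

Per the rules, bookkeeping state such as global history lives in the unbounded side buffer and is excluded from this budget.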

 

Submission Requirements

Each submission should include an abstract, a write-up (up to 4 pages using a standard double-column template), and the predictor code. We should be able to simulate your predictor with a reasonable amount of memory (not exceeding 16GB) and within six hours of simulation time. Your predictor must not violate causality (i.e., it cannot use future information to predict the current value). Furthermore, you are not allowed to spawn additional threads from your predictor code.



Link to Papers, Presentations and Code  



Link to public (135 traces, 30M instructions each) and secret (2013 traces, 100M instructions each) traces and trace reader @TAMU



CVP1 Kit v5 @NCSU (also see the README)


CVP1 Kit v5 without boost dependency @NCSU (also see the README)





Please register with the cvp-1 Google group for updates!


Click here for submission instructions.



    

 

Competition Rules

 

The competition will proceed as follows. Contestants are responsible for implementing and evaluating their algorithms in the distributed framework. An initial set of 135 traces of industry workloads (30 million instructions each) will be released to the competitors along with the distributed framework. Submissions will be compiled and run with the original version of the framework.

Quantitatively assessing the cost/complexity of predictors is difficult. To simplify the review process, maximize transparency, and minimize the role of subjectivity in selecting a champion, CVP-1 will make no attempt to assess the cost/complexity of predictor algorithms. All predictors must be implemented within the constraints of their chosen budget category (8KB, 32KB, or unlimited); competitors may choose not to compete in a particular budget category. In each budget category, an additional unbounded buffer is allowed for tracking auxiliary information used by the predictor (e.g., global history). Clear documentation, in the code as well as in the paper write-up, must be provided to show that the budget constraints are met.

Predictors will be scored on overall cycle count using a different set of evaluation traces than the one provided to the contestants. The final evaluation traces will not be released to the public, even after the final evaluation. The geometric mean of IPC across the final evaluation traces will be used as a predictor's final score. Predictors are not allowed to "profile" traces in order to tune their algorithms for a particular trace or group of traces, nor are they allowed to violate causality (i.e., use future information to predict the current value). Furthermore, competitors are not allowed to spawn additional threads from the predictor code.
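As a reference for the scoring formula, here is a small sketch of how a geometric mean of per-trace IPCs could be computed. This is a hypothetical helper, not the official scoring code:

```cpp
#include <cmath>
#include <vector>

// Hypothetical helper: geometric mean of per-trace IPC values, computed
// in log space to avoid overflow/underflow when multiplying many terms.
double geometricMeanIPC(const std::vector<double> &ipcs) {
    double logSum = 0.0;
    for (double ipc : ipcs)
        logSum += std::log(ipc);
    return std::exp(logSum / static_cast<double>(ipcs.size()));
}

// Example: geometricMeanIPC({1.2, 0.9, 2.1}) is roughly 1.31.
```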

 

 

Acceptance Criteria

 

In the interest of assembling a quality program for workshop attendees and future readers, submissions will go through an overall selection process, of which performance ranking is the primary component. To be considered, submissions must conform to the submission requirements described above. Submissions will be selected to appear in the workshop on the basis of performance ranking, novelty, practicality of the predictor, and overall quality of the paper and commented code. Novelty is not a strict requirement; for example, a contestant may submit his/her previously published design or make incremental enhancements to a previously proposed design. In such cases, overall cycle count is a heavily weighted criterion, as is overall quality of the paper (for example, analysis of new results on the common framework).

 




 


Important Dates

 

Competition formally announced: Mid-January 2018

Evaluation framework available: Early February 2018

Submissions due: April 1, 2018, at 11:59 PM CST

Acceptance notification: April 10, 2018

Camera-ready version due: May 18, 2018

Results announced: At the ISCA workshop (Sunday morning, June 3, 2018)


Steering Committee

Alaa R. Alameldeen (Intel)

Chris Wilkerson (Nvidia)

Organizing Committee

Arthur Perais (Qualcomm) (co-chair)

Rami Sheikh (Qualcomm) (co-chair)

Eric Rotenberg (NCSU)

Vinesh Srinivasan (NCSU)

Program Chair

Mikko Lipasti (Wisconsin-Madison)

Program Committee

Joshua San Miguel (Wisconsin-Madison)

Lixin Su (ARM)

Eric Rotenberg (NCSU)

Rangeen Basu Roy Chowdhury (Intel)

Huiyang Zhou (NCSU)

Alaa Alameldeen (Intel)

Freddy Gabbay (Mellanox Technologies)

Yiannakis Sazeides (University of Cyprus)

Vinesh Srinivasan (NCSU)

Chris Wilkerson (Nvidia)

Manjunath Shevgoor (Intel)

Rami Sheikh (Qualcomm)

Arthur Perais (Qualcomm)

Bob Rychlik (Qualcomm)

Jeff Rupley (Samsung)