Workload Discovery and Benchmark Synthesis from Public Code Repositories

Activity: Participating in or organising an event > Organising a conference, workshop, ...

Description

Researchers often rely on benchmarks to demonstrate the feasibility or efficiency of their contributions. However, finding the right benchmark suite can be a daunting task: existing benchmark suites may be outdated, known to be flawed, or simply irrelevant to the proposed approach. Creating a proper benchmark suite is challenging, extremely time-consuming, and, unless it becomes widely popular, a thankless endeavor. This talk introduces AutoBench, a novel approach that helps researchers find relevant workloads for their experimental evaluation needs. AutoBench relies on the huge number of open-source projects available in public repositories, and on the fact that unit testing has become best practice in software development. Using a repository crawler that employs pluggable static and dynamic analyses for filtering and workload characterization, AutoBench allows users to automatically find projects with relevant workloads. In this talk, we illustrate AutoBench's approach to finding, filtering, and characterizing real-world workloads from public open-source repositories, and present several motivating scenarios. We also present preliminary results towards the automatic generation of benchmark suites, arguing that unit tests provide a viable source of workloads, and that combining static and dynamic analysis improves the ability to identify relevant workloads that can serve as the basis for custom benchmark suites.
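The abstract describes a crawler that applies pluggable static and dynamic analyses to characterize projects and filter out those without relevant workloads. The talk does not give implementation details, so the following is only a minimal, hypothetical sketch of such a pluggable pipeline; all names (`Project`, `characterize`, `select`, the metric keys, and the threshold) are invented for illustration, not AutoBench's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: each analysis (static or dynamic) maps a candidate
# project to a set of metrics; filters are predicates over those metrics
# that decide whether the project's unit tests qualify as workloads.

@dataclass
class Project:
    name: str
    metrics: Dict[str, float] = field(default_factory=dict)

Analysis = Callable[[Project], Dict[str, float]]
Filter = Callable[[Project], bool]

def characterize(projects: List[Project], analyses: List[Analysis]) -> None:
    """Run every pluggable analysis and record the resulting metrics."""
    for p in projects:
        for analysis in analyses:
            p.metrics.update(analysis(p))

def select(projects: List[Project], filters: List[Filter]) -> List[Project]:
    """Keep only projects that pass every filter."""
    return [p for p in projects if all(f(p) for f in filters)]

# Invented examples: a stub "dynamic" analysis reporting an allocation
# rate, and a filter keeping allocation-heavy workloads (threshold is
# arbitrary, chosen only to make the example concrete).
def alloc_profile(p: Project) -> Dict[str, float]:
    return {"alloc_rate": p.metrics.get("alloc_rate", 0.0)}

def allocation_heavy(p: Project) -> bool:
    return p.metrics.get("alloc_rate", 0.0) > 100.0
```

The point of the design is that new analyses and filters plug in as plain callables, so users can compose their own notion of "relevant workload" without changing the crawler itself.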
Period: 10 Nov 2016
Event type: Guest talk
Location: Austria

Fields of science

  • 102029 Practical computer science
  • 102009 Computer simulation
  • 102 Computer Sciences
  • 102011 Formal languages
  • 102022 Software development
  • 102013 Human-computer interaction
  • 102024 Usability research

JKU Focus areas

  • Computation in Informatics and Mathematics
  • Engineering and Natural Sciences (in general)