A benchmark for end-user structured data exploration and search user interfaces

Over the years, it has been possible to observe significant improvements in the computational efficiency of Semantic Web search and exploration systems. However, it has been much harder to assess how well the user interfaces of different semantic systems help their users. One of the key factors facilitating the advancement of research in a particular field is the ability to compare the performance of different approaches. Though many such benchmarks exist in the Semantic Web fields that have experienced significant improvements, this is not the case for Semantic Web user interfaces for data exploration. We propose and demonstrate a benchmark for evaluating such user interfaces, comprising a set of typical user tasks and a well-defined procedure for assigning a measure of performance on those tasks to a semantic system. We have applied the benchmark to four such systems. Moreover, all the resources required to apply the benchmark are openly available online. We intend to initiate a community conversation leading to a generally accepted framework for comparing systems and for measuring, and thus encouraging, progress towards better semantic search and exploration tools.


Bibliographic Details
Main Authors: García, Roberto (Author), Gil, Rosa (Author), Bakke, Eirik (Author), Karger, David R (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: Elsevier BV, 2021-01-19T22:09:31Z.
Subjects:
Online Access: Get fulltext
LEADER 01861 am a22001933u 4500
001 129459
042 |a dc 
100 1 0 |a García, Roberto  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
700 1 0 |a Gil, Rosa  |e author 
700 1 0 |a Bakke, Eirik  |e author 
700 1 0 |a Karger, David R  |e author 
245 0 0 |a A benchmark for end-user structured data exploration and search user interfaces 
260 |b Elsevier BV,   |c 2021-01-19T22:09:31Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/129459 
520 |a Over the years, it has been possible to observe significant improvements in the computational efficiency of Semantic Web search and exploration systems. However, it has been much harder to assess how well the user interfaces of different semantic systems help their users. One of the key factors facilitating the advancement of research in a particular field is the ability to compare the performance of different approaches. Though many such benchmarks exist in the Semantic Web fields that have experienced significant improvements, this is not the case for Semantic Web user interfaces for data exploration. We propose and demonstrate a benchmark for evaluating such user interfaces, comprising a set of typical user tasks and a well-defined procedure for assigning a measure of performance on those tasks to a semantic system. We have applied the benchmark to four such systems. Moreover, all the resources required to apply the benchmark are openly available online. We intend to initiate a community conversation leading to a generally accepted framework for comparing systems and for measuring, and thus encouraging, progress towards better semantic search and exploration tools. 
546 |a en 
655 7 |a Article 
773 |t Journal of Web Semantics