On requirements for federated data integration as a compilation process

Adamou, Alessandro and d'Aquin, Mathieu (2015). On requirements for federated data integration as a compilation process. In: Joint Proceedings of the 5th International Workshop on Using the Web in the Age of Data (USEWOD '15) and the 2nd International Workshop on Dataset PROFIling and fEderated Search for Linked Data (PROFILES '15) (Berendt, Bettina; Dragan, Laura; Hollink, Laura; Luczak-Rösch, Markus; Demidova, Elena; Dietze, Stefan; Szymanski, Julian and Breslin, John eds.), CEUR Workshop Proceedings, CEUR-WS.org, pp. 75–80.

URL: http://ceur-ws.org/Vol-1362/PROFILES2015_paper4.pd...

Abstract

Data integration problems are commonly viewed as interoperability issues, where the burden of reaching a common ground for exchanging data is distributed across the peers involved in the process. While apparently an effective approach towards standardization and interoperability, it places a constraint on data providers who, for a variety of reasons, require backwards compatibility with proprietary or non-standard mechanisms. Publishing a holistic data API is one such use case, where a single peer performs most of the integration work in a many-to-one scenario. Incidentally, this is also the base setting of software compilers, whose operational model comprises phases that perform analysis, linkage and assembly of source code, and generation of intermediate code. There are several analogies with a data integration process, all the more so with data that live in the Semantic Web; but what requirements would a data provider need to satisfy for an integrator to query and transform its data effectively, without imposing further obligations on the provider? In this paper, we inquire into which practices and essential prerequisites could turn this intuition into a concrete and exploitable vision, within Linked Data and beyond.
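
To make the compiler analogy sketched in the abstract more tangible, the following is a minimal, hypothetical illustration (not an API or algorithm defined in the paper): each federated source is treated like a compilation unit that passes through analysis, linkage and intermediate-code generation before a single integrated view, such as a holistic data API, is assembled on the integrator's side. All class and function names are assumptions made for this sketch.

from dataclasses import dataclass, field


@dataclass
class Source:
    """A federated data source, analogous to a compilation unit."""
    name: str
    records: list[dict]


@dataclass
class IntermediateModel:
    """A shared intermediate representation, analogous to intermediate code."""
    triples: list[tuple] = field(default_factory=list)


def analyse(source: Source) -> list[tuple]:
    """Analysis phase: parse each record into subject/predicate/object triples."""
    triples = []
    for i, record in enumerate(source.records):
        subject = f"{source.name}:{i}"
        for key, value in record.items():
            triples.append((subject, key, value))
    return triples


def link(models: list[list[tuple]]) -> IntermediateModel:
    """Linkage phase: merge per-source triples into one intermediate model."""
    merged = IntermediateModel()
    for triples in models:
        merged.triples.extend(triples)
    return merged


def generate(model: IntermediateModel) -> dict:
    """Generation phase: assemble the single integrated view that a
    holistic data API would expose to its consumers."""
    view: dict = {}
    for subject, predicate, obj in model.triples:
        view.setdefault(subject, {})[predicate] = obj
    return view


if __name__ == "__main__":
    sources = [
        Source("providerA", [{"title": "Dataset 1"}]),
        Source("providerB", [{"creator": "Alice"}]),
    ]
    # The integrator runs the whole many-to-one pipeline itself,
    # without requiring changes on the providers' side.
    api_view = generate(link([analyse(s) for s in sources]))
    print(api_view)

In this reading, the paper's question becomes: what must each provider guarantee about its data (the "source") so that the integrator can run such analysis, linkage and generation phases effectively, with no further coordination required?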
