How can we process massive datasets that require demanding computation?
Creating new data-intensive services, in terms of both dataset size and processing demands, is an onerous and costly process that requires deep expertise.
It requires high performance beyond what commodity systems can achieve, business logic typically expressed by writing application code, complex software stacks that are hard to deploy and maintain, and dedicated, per-application testbeds to reach the desired performance levels. However, most organizations today lack these resources and the associated expertise.
EVOLVE addresses these issues by offering new HPC-enabled data-analytics capabilities for processing massive and demanding datasets without requiring extensive IT expertise.
EVOLVE aims to build a large-scale testbed by integrating technology from three areas:
An advanced computing platform with HPC features and systems software.
A versatile big-data processing stack for end-to-end workflows.
Ease of deployment, access, and use in a shared manner, while addressing data protection.
EVOLVE's benefits for processing large and demanding datasets
Reduced turnaround time for domain experts, industry (large companies and SMEs), and end-users.
Increased productivity when designing new products and services, by processing large datasets.
Reduced capital and operational costs for acquiring and maintaining computing infrastructure.
Accelerated innovation via faster design and deployment of innovative services that unleash creativity.
EVOLVE is a European Innovation Action funded by the European Union's Horizon 2020 Research and Innovation programme. The project brings together 19 specialised partners from 11 European countries.