Developer(s) | INRIA, École Normale Supérieure de Lyon, SysFera, CNRS, Claude Bernard University Lyon 1 |
---|---|
Stable release | 2.6.1 / 04/11/11 |
Written in | C++, CORBA |
Operating system | Cross-platform |
Type | Grid and cloud computing |
License | CeCILL |
Website | graal |
DIET is middleware created in 2000 [1] and designed for high-performance computing. It is currently developed by INRIA, École Normale Supérieure de Lyon, SysFera, CNRS, and Claude Bernard University Lyon 1, and is released as open-source software under the CeCILL license.
Like NetSolve/GridSolve and Ninf, DIET is compliant with the GridRPC standard from the Open Grid Forum [2].
The aim of the DIET project is to develop a set of tools to build computational servers. The distributed resources are managed transparently through the middleware, which can work with workstations, clusters, grids, and clouds.
DIET is used to manage the Décrypthon Grid installed by IBM at six French sites (Bordeaux 1, Lille 1, Paris 6, ENS Lyon, Crihan in Rouen, and Orsay).
Usually, GridRPC environments have five different components: clients that submit problems to servers, servers that solve the problems sent by clients, a database that contains information about software and hardware resources, a scheduler that chooses an appropriate server depending on the problem sent and the information contained in the database, and monitors that get information about the status of the computational resources.
DIET's architecture follows a different, hierarchical design: clients submit requests to a Master Agent, which forwards them through a hierarchy of agents down to the Server Daemons (SeDs) that provide the actual services.
Two approaches were developed:
For workflow management, DIET uses an additional entity called MA DAG. This entity can work in two modes: one in which it defines a complete scheduling of the workflow (ordering and mapping), and one in which it defines only an ordering for the workflow execution. Mapping is then done in the next step by the client, using the Master Agent to find the server where the workflow services should be run.
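In the ordering-only mode, the MA DAG essentially has to produce a valid execution order for the tasks of the workflow DAG. As a rough illustration (this is a generic topological sort over hypothetical task names, not DIET's actual implementation), such an ordering could be computed as follows:

```cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

// Sketch only: compute one valid execution order for a workflow DAG.
// `deps` maps each task to the tasks it depends on (its predecessors).
std::vector<std::string> orderWorkflow(
    const std::map<std::string, std::vector<std::string>>& deps) {
  std::map<std::string, int> indegree;
  std::map<std::string, std::vector<std::string>> successors;
  for (const auto& [task, preds] : deps) {
    indegree[task];  // ensure every task has an entry, even with no preds
    for (const auto& p : preds) {
      successors[p].push_back(task);
      ++indegree[task];
      indegree[p];  // a predecessor may not appear as a key in `deps`
    }
  }
  std::queue<std::string> ready;  // tasks whose dependencies are all met
  for (const auto& [task, d] : indegree)
    if (d == 0) ready.push(task);
  std::vector<std::string> order;
  while (!ready.empty()) {
    std::string t = ready.front();
    ready.pop();
    order.push_back(t);
    // "Executing" a task unblocks its successors.
    for (const auto& s : successors[t])
      if (--indegree[s] == 0) ready.push(s);
  }
  return order;  // a valid execution order if the graph is acyclic
}
```

Under this mode, mapping each ordered task to a concrete server would still be left to the client, which queries the Master Agent for a suitable SeD.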
DIET provides a degree of control over the scheduling subsystem via plug-in schedulers [3]. When a service request from an application arrives at a SeD, the SeD creates a performance-estimation vector, a collection of performance-estimation values that are pertinent to the scheduling process for that application. The values to be stored in this structure can be either values provided by CoRI (Collectors of Resource Information) or custom values generated by the SeD itself. The design of the estimation vector's subsystem is modular.
CoRI generates a basic set of performance-estimation values which are stored in the estimation vector and identified by system-defined tags. The following table lists the tags that may be generated by a standard CoRI installation.
Information tag (prefix EST_) | Multi-value | Explanation |
---|---|---|
TCOMP | | predicted time to solve a problem (s) |
TIMESINCELASTSOLVE | | time since the last solve was made (s) |
FREECPU | | fraction of free CPU, between 0 and 1 |
LOADAVG | | average CPU load |
FREEMEM | | amount of free memory (MB) |
NBCPU | | number of available CPUs |
CPUSPEED | Yes | frequency of the CPUs (MHz) |
TOTALMEM | | total memory size (MB) |
BOGOMIPS | Yes | the BogoMips of the CPUs |
CACHECPU | Yes | cache size of the CPUs (KB) |
TOTALSIZEDISK | | size of the disk partition (MB) |
FREESIZEDISK | | amount of free space on the partition (MB) |
DISKACCESREAD | | average disk read speed (MB/s) |
DISKACCESWRITE | | average disk write speed (MB/s) |
ALLINFOS | Yes | fills all of the above fields |
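As a hedged sketch of how a plug-in scheduler might consume such tagged values, consider the following. The `EstimationVector` class and the `schedulingScore` metric are illustrative inventions for this article, not DIET's actual C++ API; only the `EST_` tag names come from the table above.

```cpp
#include <map>
#include <string>

// Illustrative stand-in for a performance-estimation vector: a map from
// system-defined tags (see the table above) to measured values.
class EstimationVector {
 public:
  void set(const std::string& tag, double value) { values_[tag] = value; }
  bool has(const std::string& tag) const { return values_.count(tag) > 0; }
  double get(const std::string& tag, double fallback = 0.0) const {
    auto it = values_.find(tag);
    return it == values_.end() ? fallback : it->second;
  }

 private:
  std::map<std::string, double> values_;
};

// A hypothetical custom metric a plug-in scheduler could use to rank
// servers: free-CPU fraction weighted by the number of available CPUs.
double schedulingScore(const EstimationVector& ev) {
  return ev.get("EST_FREECPU") * ev.get("EST_NBCPU", 1.0);
}
```

A SeD would fill such a vector either from CoRI-collected values or from custom values it computes itself, and the agents would then compare the resulting scores across candidate servers.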
Three different data managers have been integrated into DIET:
Parallel resources are generally accessible through an LRMS (Local Resource Management System), also called a batch system. DIET provides an interface to several existing LRMSs for executing jobs: LoadLeveler on IBM resources, OpenPBS, a fork of the well-known PBS system, and OAR, developed by IMAG in Grenoble and used on the Grid'5000 research grid. Most of the submitted jobs are parallel jobs written against the MPI standard with an implementation such as MPICH or LAM.
A Cloud extension for DIET was created in 2009 [5]. DIET is thus able to access Cloud resources through two existing Cloud providers:
Category:Cloud computing
Category:Grid computing products
Category:Workflow technology