Showing 1 - 10 of 14
for search: '"Emeras, Joseph"'
Author:
Emeras, Joseph
The author did not provide an abstract in French.
High Performance Computing is preparing the era of the transition from Petascale to Exascale. Distributed computing systems are already facing new scalability problems due to the increasing number
External link:
http://www.theses.fr/2013GRENM081/document
Author:
Emeras, Joseph
High Performance Computing is preparing the era of the transition from Petascale to Exascale. Distributed computing systems are already facing new scalability problems due to the increasing number of computing resources to manage. It is now necessary
External link:
http://tel.archives-ouvertes.fr/tel-00940055
http://tel.archives-ouvertes.fr/docs/00/94/00/55/PDF/thesis.pdf
Academic article
Author:
Emeras, Joseph
Published in:
Networking and Internet Architecture [cs.NI]. Université de Grenoble, 2013. English. ⟨NNT : 2013GRENM081⟩
High Performance Computing is preparing the era of the transition from Petascale to Exascale. Distributed computing systems are already facing new scalability problems due to the increasing number of computing resources to manage. It is now necessary
External link:
https://explore.openaire.eu/search/publication?articleId=od______2592::9dc22587cdadd64b931b1e6349198983
https://tel.archives-ouvertes.fr/tel-00940055/document
Published in:
ComPAS'2013 Proceedings, 2013, Grenoble, France
National audience; no abstract
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::61c67ec1b0c1ae22c974a2e94cd6dfb8
https://hal.inria.fr/hal-00916284
https://hal.inria.fr/hal-00916284
Published in:
PPAM'2013, 2013, Warsaw, Poland
International audience; Campaign Scheduling is characterized by multiple job submissions issued from multiple users over time. This model perfectly suits today's systems since most available parallel environments have multiple users sharing a common
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::f598476bc172b6ccbaaf23a5f49e7f58
https://hal.inria.fr/hal-00918374/document
Published in:
[Research Report] RR-LIG-040, LIG. 2013
LIG research reports - ISSN: 2105-0422; In the HPC community, the System Utilization metric makes it possible to determine whether the resources of the cluster are used efficiently by the batch scheduler. This metric considers that all the allocated resource
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::b0c4532145e653afa41ab63c2d74a06d
https://hal.archives-ouvertes.fr/hal-01471983
Published in:
[Research Report] RR-7755, INRIA. 2011
In the scientific experimentation process, an experiment result needs to be analyzed and compared with several others, potentially obtained under different conditions. Thus, the experimenter needs to be able to redo the experiment. Several tools are ded
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::9c490e9f7737ba11f664af7e3be5103d
https://inria.hal.science/inria-00630044/document
Published in:
Renpar, 2011, Saint-Malo, France
National audience
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::37120f611aa59d03c28e379bb2272224
https://hal.inria.fr/hal-00788805
Published in:
2016 16th IEEE/ACM International Symposium on Cluster, Cloud & Grid Computing (CCGrid); 2016, p267-272, 6p