AMGCC15 Program
Latest revision as of 14:05, 21 September 2015
- Location: Boston Marriott Cambridge, Cambridge, MA, USA
- Date: September 21 (Monday), 2015
- Time: 08:30 AM - 06:00 PM (Salon VII)
This workshop features two keynotes by Dr. Alan Edelman (Massachusetts Institute of Technology) and Dr. Kate Keahey (Argonne National Laboratory/University of Chicago), two invited talks by Robert Quick (Indiana University) and Dr. Hyeonsang Eom (Seoul National University), and oral presentations of 7 full papers.
| Time | Description | Presenter | Institution | Session Chair |
|---|---|---|---|---|
| 08:30 - 09:00 | ICCAC 2015 Registration and Check-in | | | |
| 09:00 - 09:10 | Welcome Remarks by AMGCC workshop organizers | | | |
| 09:10 - 10:00 | Keynote: Julia: A fresh approach to parallel programming | Alan Edelman | MIT | Hyeonsang Eom |
| 10:00 - 10:45 | Invited Talk: Autonomy in the Open Science Grid or Pay No Attention to the Operator Behind the Curtains | Robert Quick | Indiana University | |
| 10:45 - 11:00 | Coffee Break | | | |
| 11:00 - 11:30 | Performance Analysis of Loosely Coupled Applications in Heterogeneous Distributed Computing Systems | Eunji Hwang, Seontae Kim, Tae-Kyung Yoo, Jik-Soo Kim, Soonwook Hwang and Young-Ri Choi | Ulsan National Institute of Science and Technology, KISTI | Jik-Soo Kim |
| 11:30 - 12:00 | Fine-Grained, Adaptive Resource Sharing for Real Pay-Per-Use Pricing in Clouds | Young Choon Lee, Youngjin Kim, Hyuck Han and Sooyong Kang | Macquarie University, Hanyang University, Dongduk Women’s University | |
| 12:00 - 12:30 | A Job Dispatch Optimization Method on Cluster and Cloud for Large-scale High-Throughput Computing Service | Jieun Choi, Seoyoung Kim, Theodora Adufu, Soonwook Hwang and Yoonhee Kim | Sookmyung Women's University, KISTI | |
| 12:30 - 14:00 | Lunch | | | |
| 14:00 - 14:45 | Keynote: Chameleon: A Large-scale, Reconfigurable Experimental Environment for Next Generation Cloud Research | Kate Keahey | Argonne National Laboratory/University of Chicago | Jaehwan Lee |
| 14:45 - 15:15 | An Empirical Evaluation of NVM Express SSD | Yongseok Son, Hara Kang, Hyuck Han and Heon Young Yeom | Seoul National University, Dongduk Women's University | |
| 15:15 - 15:45 | SCOUT: A Monitor & Profiler of Grid Resources for Large-Scale Scientific Computing | Md Azam Hossain, Hieu Trong Vu, Jik-Soo Kim, Myungho Lee and Soonwook Hwang | University of Science & Technology, KISTI, Myongji University | |
| 15:45 - 16:00 | Coffee Break | | | |
| 16:00 - 16:45 | Invited Talk: How can we allocate “right” resources to virtual machines in virtualized data centers? – Workload-aware hierarchical scheduling with OpenStack | Hyeonsang Eom | Seoul National University | Yoonhee Kim |
| 16:45 - 17:15 | A CPU Overhead-aware VM Placement Algorithm for Network Bandwidth Guarantee in Virtualized Data Centers | Kwonyong Lee and Sungyong Park | Sogang University | |
| 17:15 - 17:45 | Feasibility of the Computation Task Offloading to GPGPU-enabled Devices in Mobile Cloud | Kihan Choi, Jaehoon Lee, Youngjin Kim, Sooyong Kang and Hyuck Han | Hanyang University, Dongduk Women’s University | |
| 17:45 - 18:00 | Closing Remarks | | | |
Keynotes
Julia: A fresh approach to parallel programming
Talk Abstract
The Julia programming language is gaining enormous popularity. Julia was designed to be easy and fast. Most importantly, Julia shatters deeply established notions widely held in the applied community. Julia shows the fascinating dance between specialization and abstraction. Specialization allows for custom treatment: we can pick just the right algorithm for the right circumstance, and this can happen at runtime based on argument types (code selection via multiple dispatch). Abstraction recognizes what remains the same after differences are stripped away and ignored as irrelevant. The recognition of abstraction allows for code reuse (generic programming). A simple idea that yields incredible power. Julia is many things to many people. In this talk we describe how Julia was built on the heels of our parallel computing experience with Star-P, which began as an MIT research project and was a software product of Interactive Supercomputing. Our experience taught us that bolting parallelism onto an existing language that was not designed for performance or parallelism is difficult at best, and impossible at worst. One of our (not so secret) motivations to build Julia was to have the language we wanted for parallel numerical computing.
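The "code selection via multiple dispatch" idea mentioned above can be sketched outside Julia as well. The toy below (a hedged illustration in Python, not Julia's actual mechanism; all names are made up) selects an implementation at call time from the runtime types of all arguments:

```python
# Minimal multiple-dispatch sketch: a registry maps tuples of argument types
# to implementations, and the dispatcher picks one at call time.

def multimethod(registry):
    """Return a dispatcher that selects an implementation by argument types."""
    def dispatch(*args):
        impl = registry.get(tuple(type(a) for a in args))
        if impl is None:
            raise TypeError(
                "no method for " + repr(tuple(type(a).__name__ for a in args))
            )
        return impl(*args)
    return dispatch

# Specialization: a custom algorithm per type combination.
combine = multimethod({
    (int, int): lambda a, b: a + b,        # integer addition
    (str, str): lambda a, b: a + " " + b,  # string joining
    (int, str): lambda a, b: b * a,        # repeat a string a times
})

print(combine(2, 3))             # 5
print(combine("hello", "world")) # hello world
print(combine(3, "ab"))          # ababab
```

Julia generalizes this pattern into the core of the language, with method tables and type-specialized compilation rather than a hand-written registry.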
Chameleon: A Large-scale, Reconfigurable Experimental Environment for Next Generation Cloud Research
Talk Abstract
Cloud services have become ubiquitous to all major 21st century economic activities -- there are, however, still many open questions surrounding this new technology. In particular, many open research questions concern the relationship between cloud computing and high performance computing, the suitability of cloud computing for data-intensive applications, and its position with respect to emergent trends such as Software Defined Networking. A persistent barrier to further understanding of those issues has been the lack of a large-scale testbed where they can be explored.
With funding from the National Science Foundation (NSF), the Chameleon project provides such a large-scale platform to the open research community, allowing them to explore transformative concepts in deeply programmable cloud services, design, and core technologies. The testbed, deployed at the University of Chicago and the Texas Advanced Computing Center, will ultimately consist of almost 15,000 cores and 5 PB of total disk space, and will leverage a 100 Gbps connection between the sites. While a large part of the testbed will consist of homogeneous hardware to support large-scale experiments, a portion of it will support heterogeneous units allowing experimentation with high-memory, large-disk, low-power, GPU, and co-processor units. To support a broad range of experiments, the project will support a graduated configuration system allowing full user configurability of the software stack, from provisioning of bare metal and network interconnects to delivery of fully functioning cloud environments. This talk will describe the goals, the building, and the modus operandi of the testbed.
Invited Talks
Autonomy in the Open Science Grid or Pay No Attention to the Operator Behind the Curtains
Talk Abstract
The Open Science Grid (OSG) is a distributed computational facility providing resources for High Throughput Computing (HTC) workflows. These resources are located at 125 locations across North America and South America, with minor extensions to Asia, Europe, and Africa. By nature, a distributed computing facility of this extent is a chaotic ecosystem, with scheduled and unscheduled outages, network fluctuations, and resource and policy autonomy. To consolidate this system into a functional operational environment the OSG uses a variety of technical and social techniques. These include central operational services that provide dynamic snapshots of the state of OSG, continuous monitoring and subsequent self-repairing actions, active 24x7 tracking and troubleshooting of critical production issues, automatically managed glide-in based workflows, and high availability operational services. This talk will cover an introduction to the OSG, a discussion of the scale and chaotic nature of the environment, and the techniques used to provide autonomic, production-quality service.
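The "continuous monitoring and subsequent self-repairing actions" pattern described above can be sketched as follows; this is an illustrative toy, not OSG code, and the service names and the probe/repair callables are assumptions for the example:

```python
# One monitoring pass over a set of services: probe each, trigger a repair
# action on failure, and record a snapshot of the resulting state.

def monitor_cycle(services, probe, repair):
    """Run one pass; return a snapshot mapping service -> 'up'/'repaired'/'down'."""
    snapshot = {}
    for svc in services:
        if probe(svc):
            snapshot[svc] = "up"
        else:
            repair(svc)  # e.g. restart a service or resubmit a glide-in
            snapshot[svc] = "repaired" if probe(svc) else "down"
    return snapshot

# Toy environment: 'ce-2' is down until repaired.
state = {"ce-1": True, "ce-2": False, "se-1": True}
probe = lambda svc: state[svc]
repair = lambda svc: state.__setitem__(svc, True)

print(monitor_cycle(["ce-1", "ce-2", "se-1"], probe, repair))
# {'ce-1': 'up', 'ce-2': 'repaired', 'se-1': 'up'}
```

In a production system the snapshot feeds the central "dynamic state" services the abstract mentions, and repair actions are rate-limited and escalated to operators when they fail.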
How can we allocate “right” resources to virtual machines in virtualized data centers? – Workload-aware hierarchical scheduling with OpenStack
Talk Abstract
Data centers are becoming larger and more heterogeneous, and possibly highly distributed. It is crucial to manage many heterogeneous resources effectively in order to provide services efficiently and cost-effectively; it is necessary to allocate “right” resources to Virtual Machines (VMs) in virtualized data centers in order to decrease the cost of operation while meeting SLAs (Service Level Agreements) such as guaranteeing a latency requirement. One of the most effective ways to allocate “right” resources to a VM is to consider the characteristics of the VM, such as the memory intensiveness of the workload executed in it. However, existing schedulers, including the Nova scheduler of OpenStack and the DRS (Distributed Resource Scheduler) of VMware, do not consider these kinds of characteristics. We propose a workload-aware hierarchical scheduler that schedules VMs on OpenStack clusters of nodes, considering the characteristics of the workloads executed in the VMs and the hierarchy of the resources to be allocated. Our experimental study shows that our memory-intensiveness-aware scheduler may outperform the default scheduler of OpenStack, as well as DRS, in terms of throughput and latency.
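As a hedged illustration of the workload-aware idea (a sketch in the spirit of a Nova filter-scheduler weigher, not the authors' scheduler and not the real OpenStack weigher API; the scoring formula and field names are assumptions), the snippet below scores candidate hosts by free RAM discounted by expected memory contention:

```python
# Score hosts for a VM placement: prefer free RAM, but penalize hosts where a
# memory-intensive VM would contend with already memory-intensive neighbors.

def weigh_hosts(hosts, vm_mem_intensity):
    """Return hosts sorted best-first.

    hosts: list of dicts with 'name', 'free_ram_mb', and 'mem_intensity'
           (aggregate memory pressure of VMs already placed there, in 0..1).
    vm_mem_intensity: 0..1 estimate for the VM being scheduled.
    """
    def score(host):
        contention = vm_mem_intensity * host["mem_intensity"]
        return host["free_ram_mb"] * (1.0 - contention)
    return sorted(hosts, key=score, reverse=True)

hosts = [
    {"name": "node1", "free_ram_mb": 8192, "mem_intensity": 0.9},
    {"name": "node2", "free_ram_mb": 6144, "mem_intensity": 0.1},
]
# A memory-intensive VM (intensity 0.8) lands on node2 despite less free RAM.
print(weigh_hosts(hosts, vm_mem_intensity=0.8)[0]["name"])  # node2
```

A hierarchical scheduler of the kind the abstract proposes would apply such scoring at each level of the resource hierarchy (e.g. cluster, then node) rather than over a flat host list.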