A Model-Based System to
Automate Cloud Resource
Allocation and Optimization
CISC 836 – GRAYDON SMITH
Problem
Cloud providers offer quick and easy methods to allocate resources
Optimizing the allocation is not a simple task
Non-optimized allocations are not cost-effective
Typical approach: pick several configurations and load test each
Motivating Example
Hybrid 4-Dimensional Augmented Reality (HD4AR)
◦ Take a picture
◦ Upload it to the cloud
◦ Receive an augmented image with information tagged to it
Originally designed for in-house, standalone use
Demand caused creation of HD4ARWebDataService
QoS requirements of 2000 active users/minute
Challenges
1) Translating high-level QoS goals into low-level load generator configuration
a. jMeter Thread Groups
b. OS-level details create complexity
2) Customized test flows not supported or difficult to verify
a. Testing a single API at a time
b. Chaining requests based on logic
3) Resource bottleneck analysis
a. Difficult or non-existent in current tools
b. Hard to correlate with QoS metrics
Challenges Continued
4) Lack of model analysis to derive a resource configuration strategy
a. Termination criteria
b. How much data is enough?
5) Lack of end-to-end testing of resource allocation, load generation, resource utilization metric
collection, and QoS metric tracking
a. Most systems generate load tests for single items
b. Focus on only a few QoS metrics
c. How to test multi-tier configurations? (security groups, load balancers, databases, etc.)
d. Providers offer many resource types and parameters
e. E.g., Amazon AWS offers 50 resource types with hundreds of parameters
Solution (ROAR)
Model-based Resource Optimization, Allocation and Recommendation (ROAR) System
◦ Raises abstraction level when load-testing
◦ Textual DSL specifies testing plan – Generic Resource Optimization for Web applications Language
(GROWL)
◦ Implemented in XText
◦ DSL specifies QoS requirements and high-level test plan
◦ Generates jMeter XML configuration
◦ Code generator to automate allocation and testing of configurations
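The translation step can be illustrated with a small sketch. This is a hypothetical Python fragment, not ROAR's actual generator: the element names (numThreads, rampUpSeconds, durationSeconds) are simplified stand-ins for the much richer jMeter .jmx schema.

```python
# Hypothetical sketch: generate a simplified jMeter-style thread-group
# configuration from high-level QoS parameters, illustrating the kind of
# translation ROAR's code generator automates. Element names are
# simplified; real jMeter .jmx files use a richer schema.
import xml.etree.ElementTree as ET

def generate_thread_group(users: int, ramp_up_s: int, duration_s: int) -> str:
    plan = ET.Element("testPlan")
    tg = ET.SubElement(plan, "threadGroup")
    ET.SubElement(tg, "numThreads").text = str(users)
    ET.SubElement(tg, "rampUpSeconds").text = str(ramp_up_s)
    ET.SubElement(tg, "durationSeconds").text = str(duration_s)
    return ET.tostring(plan, encoding="unicode")

xml_config = generate_thread_group(users=2000, ramp_up_s=60, duration_s=300)
```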
Addressing Challenges 1 & 2
GROWL addresses these challenges by:
◦ Specifying QoS goals directly
◦ Removing the need to configure Thread Groups, Logic Controllers, etc.
◦ No need to know underlying jMeter details
◦ jMeter's thread count alone may not generate sufficient load, or may "ramp up" too quickly
◦ Throughput shaping by steps
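The "throughput shaping by steps" idea can be sketched as follows; the step count and hold time here are illustrative parameters, not values from the paper.

```python
# Hypothetical sketch of throughput shaping by steps: instead of letting
# jMeter ramp thread counts arbitrarily, the target request rate is raised
# in discrete steps up to the QoS goal, giving each step a stable window
# in which to measure behavior.
def shape_throughput(target_rps: float, steps: int, hold_s: int):
    """Return (rate, hold_seconds) pairs for each step of the ramp."""
    return [(target_rps * (i + 1) / steps, hold_s) for i in range(steps)]

schedule = shape_throughput(target_rps=2400, steps=4, hold_s=60)
# schedule == [(600.0, 60), (1200.0, 60), (1800.0, 60), (2400.0, 60)]
```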
Comparison
Addressing Challenge 3
Automates Temporal Correlation of Resource Utilization and QoS Metrics
Throughput Shaper discretizes tests into temporal blocks
Lines up server resource usage with QoS measurements
Only looks at CPU, memory, network I/O, and disk I/O
(Step 4 of the process)
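A minimal sketch of the block-wise correlation idea, assuming fixed-length temporal blocks; the sample data and block size are made up for illustration.

```python
# Because the throughput shaper holds each load step for a fixed window,
# server-side utilization samples can be bucketed into the same temporal
# blocks as the QoS measurements and compared block by block.
def correlate(samples, block_s):
    """Average (timestamp_s, value) samples per block of block_s seconds."""
    blocks = {}
    for t, v in samples:
        blocks.setdefault(int(t // block_s), []).append(v)
    return {b: sum(vs) / len(vs) for b, vs in blocks.items()}

cpu = [(5, 0.30), (25, 0.50), (65, 0.90), (95, 0.95)]  # (sec, CPU fraction)
by_block = correlate(cpu, block_s=60)
# Block 0 averages 0.30 and 0.50 (~0.4); block 1 averages 0.90 and 0.95 (~0.925)
```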
Addressing Challenge 4
Filtering step (Step 5 of the process)
N = ⌈ Tex / Tp ⌉
N : number of servers needed
Tex : Expected throughput
Tp : Actual peak throughput of a single server
Key problem:
 Deriving a reasonable Tp from all data points
 Tp is determined by a resource reaching near-100% utilization and a dramatic latency increase
Repeat process for each configuration
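The sizing formula N = ⌈Tex / Tp⌉ can be sketched directly; the 700 req/s peak below is an illustrative measurement, not a number from the paper.

```python
# Sketch of the filtering step's sizing formula, N = ceil(Tex / Tp):
# given the expected throughput and a single server's measured peak
# throughput, compute how many servers of that configuration are needed.
import math

def servers_needed(expected_tput: float, peak_tput_per_server: float) -> int:
    return math.ceil(expected_tput / peak_tput_per_server)

# e.g. 2400 req/s expected, one server peaks at 700 req/s:
n = servers_needed(2400, 700)
# n == 4  (2400 / 700 ≈ 3.43, rounded up)
```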
Addressing Challenge 5
Different applications require a variety of configurations, e.g. libraries and environment
Package the application using Docker
 Virtualized packaging
 Versioned in Git
Currently only supports Amazon AWS
Plans for OpenStack support
Case Study
Experiments outlined
◦ Based on the motivating example
◦ 2400 requests/sec
◦ 5000 requests/sec
Conclusion/Discussion
Motivations and problems clearly defined
Presents a clear process for assessing cloud resource configurations
Low detail on GROWL Xtext model
Multiple case studies?
References
Y. Sun, J. White, S. Eade. A Model-Based System to Automate Cloud Resource Allocation and
Optimization. 17th International Conference on Model-Driven Engineering Languages and
Systems (MoDELS'14). Springer LNCS 8767. Pages 18-34.
Amazon Web Services (2014), http://aws.amazon.com/
Apache JMeter (2014), https://jmeter.apache.org/