Large scale experiments

Methodology, Guidelines and the Design of the Experiments
I. Mlakar, D. Ceric, A. Lipaj
Valladolid, 17/12/2014
Goals
 To establish the testbed
 In-depth testing and fine-tuning of particular components as well as
the entire system
 Definition and development of testing protocols and procedures
 Deployment of outcomes (elements/components) of previous
WPs
 Setting up working prototypes as well as the system
 Delivery of the technical summary of the project
These goals will be achieved through the large-scale experiments designed
to run on FIRE facilities
Experiments and Facilities
Experiment 1
 validating the technology and consistency of
wireless protocols
 CREW facilities: w-iLab.t testbed in Ghent
Experiment 2
 validating the system's scalability and the related
bandwidth/computational power
 OpenLab testbed
Experiment 3
 validating integrability with other ecosystems
 SmartSantander testbed
Experiment 1: CREW testbed
Measuring the influence of network interference on communication. How far can
we push the network prior to saturation?
Metrics:
reliability, number of packets sent/received, radio-sleep percentage, WiFi
throughput, message delay, and application-level events (a Python sketch for
extracting these metrics follows the scenario list)
Scenarios:
 Configuration scenario (interference generated by the SandS system at start-up); simulating
appliances at different parts of a building; 9 physically integrated SandS-powered
boards + a series of iNodes configured as access points
 Single-network setup (interference generated by the SandS system operating under a stress
scenario); all the appliances in a kitchen running at full capacity; 9 physical
boards + multiple virtual iNode clients + 1 iNode configured as access point
 Multiple-network setup (cross-network interference generated by multiple SandS systems
operating under regular/stress scenarios); multiple kitchens operating at the same time,
saturating the radio spectrum; 9 physical boards + multiple virtual iNode clients +
multiple iNodes configured as access points (operating on overlapping channels)
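To make these measurements concrete, here is a minimal Python sketch of how the
packet delivery ratio, mean message delay and per-node radio-sleep percentage
could be computed from an experiment trace. The CSV layout and event names are
illustrative assumptions, not the actual w-iLab.t log format.

# Minimal sketch: extracting Experiment 1 metrics from a per-event log.
# ASSUMPTION: a CSV log with columns event,node,msg_id,timestamp_s,value;
# the real w-iLab.t trace format will differ.
import csv
from collections import defaultdict

def compute_metrics(log_path):
    sent, received = {}, {}
    sleep_time = defaultdict(float)   # node -> seconds spent in radio sleep
    total_time = defaultdict(float)   # node -> total observed seconds
    with open(log_path) as f:
        for row in csv.DictReader(f):
            t = float(row["timestamp_s"])
            if row["event"] == "send":
                sent[row["msg_id"]] = t
            elif row["event"] == "recv":
                received[row["msg_id"]] = t
            elif row["event"] == "radio_sleep":
                sleep_time[row["node"]] += float(row["value"])
            elif row["event"] == "uptime":
                total_time[row["node"]] = float(row["value"])

    delivered = [m for m in sent if m in received]
    pdr = len(delivered) / len(sent) if sent else 0.0
    delays = [received[m] - sent[m] for m in delivered]
    mean_delay = sum(delays) / len(delays) if delays else 0.0
    sleep_pct = {n: 100.0 * sleep_time[n] / total_time[n]
                 for n in total_time if total_time[n] > 0}
    return pdr, mean_delay, sleep_pct

if __name__ == "__main__":
    pdr, delay, sleep = compute_metrics("experiment1_trace.csv")
    print(f"delivery ratio: {pdr:.2%}, mean delay: {delay * 1000:.1f} ms")
    for node, pct in sorted(sleep.items()):
        print(f"{node}: radio sleep {pct:.1f}%")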
Implementation:
Months 27-28, 38 hours divided into two periods:
 first period: 4 hours of preparation + 12 hours of experiments
 second period: 2 hours of preparation + 20 hours of experiments
Experiment 2: OpenLab testbed
Measuring performance and scalability. How far can we scale the system and the
traffic it generates before QoS becomes unacceptable?
Metrics:
 DI-appliance communication success rate, bandwidth/resource
allocation/consumption, error/recovery rate
 performance testing techniques (e.g. load testing, stress testing, soak
testing, spike testing, balancing tests) executed via the NEPI framework
(a NEPI-style sketch follows the scenario list)
Scenarios:
 logical testing (how the system responds to realistic traffic and system load); simulating
appliances generating realistic traffic and realistic system load; 500 virtual
appliances + 10 physical appliances over a period of 10 days
 communication testing (how the system responds to a gradual increase in traffic and
system load); progressive increase in the number of connected appliances, spread
across the server basins; sensitivity of particular SandS components (e.g. NI, ESN,
EDB); fixed number of servers with a monitored number of virtual appliances and
simulated traffic
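As a sketch of how the progressive ramp-up could be scripted, the following
uses the NEPI ExperimentController in the style of the NEPI 3.x examples; the
resource types follow those examples, while the hostname, slice name and the
appliance_emulator command are placeholders invented for illustration.

# Hedged sketch of the communication test: progressively adding virtual
# appliances via NEPI (API shapes follow NEPI 3.x examples; hostnames,
# credentials and the 'appliance_emulator' command are assumptions).
from time import sleep
from nepi.execution.ec import ExperimentController

ec = ExperimentController(exp_id="sands-exp2-ramp")

node = ec.register_resource("planetlab::Node")
ec.set(node, "hostname", "planetlab-node.example.org")  # placeholder host
ec.set(node, "username", "sands_slice")                 # placeholder slice

apps = []
for i in range(50):  # 50 virtual appliances, started in batches below
    app = ec.register_resource("linux::Application")
    # 'appliance_emulator' is a hypothetical SandS traffic generator.
    ec.set(app, "command", f"./appliance_emulator --id {i} --rate realistic")
    ec.register_connection(app, node)
    apps.append(app)

ec.deploy([node])
# Progressive ramp-up: start appliances in batches of 10, then let the
# system settle before the next load increment and observe QoS metrics.
for step in range(0, len(apps), 10):
    ec.deploy(apps[step:step + 10])
    sleep(600)

for app in apps:
    ec.wait_finished(app)
print(ec.trace(apps[0], "stdout"))  # inspect one emulator's output
ec.shutdown()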
Implementation:
Months 28-29, 18 days divided into two periods:
 first period: 10 days (logical testing)
 second period: 8 days (communication testing)
Experiment 3: SmartSantander testbed
Testing interoperability with semantically compatible ecosystems. How can the
SmartSantander and SandS systems benefit from cooperating?
Metrics:
 data mismatches in content and addressing; the DI's flexibility to receive
new signals; efficiency of the integration
Scenarios:
 Home rules (how SandS benefits from additional signals); integration of signals provided by
the SmartCity system (e.g. mobility sensors, environmental sensors, irrigation
sensors, traffic and parking sensors) into Home rules (see the integration sketch
after the scenario list)
 Appliances (how SandS benefits from power/water consumption expectations); logical test of the
integration of city environment sensors, short-term weather forecasting and
similar sensors into the DI scheduler and consistency checker
 Alarms (how SandS responds to and benefits from additional messages); stress test of the
entire system by exposing it to alarm messages sent by the municipality
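As an illustration of the Home rules scenario, the sketch below folds one
SmartSantander-style observation into a start/defer decision for an appliance;
the endpoint URL, JSON shape and the rule thresholds are assumptions made up
for the example, not actual SmartSantander or SandS interfaces.

# Hedged sketch: folding a SmartSantander-style observation into a SandS
# Home rule. Endpoint URL, JSON fields and rule thresholds are illustrative
# assumptions only.
import json
from urllib.request import urlopen

SENSOR_URL = "https://api.example.org/smartsantander/environment/latest"  # placeholder

def fetch_observation(url=SENSOR_URL):
    # Assumed response shape: {"type": "...", "value": <number>}
    with urlopen(url) as resp:
        return json.load(resp)

def home_rule_allow_start(appliance, observation):
    """Example Home rule: postpone energy-hungry appliances when the city
    reports an air-quality alarm or a heat peak."""
    if observation["type"] == "air_quality" and observation["value"] > 150:
        return False  # defer start until the alarm clears
    if observation["type"] == "temperature" and observation["value"] > 38:
        return appliance != "dryer"  # example appliance-specific restriction
    return True

if __name__ == "__main__":
    obs = {"type": "air_quality", "value": 180}  # stand-in for fetch_observation()
    print("start washing machine:", home_rule_allow_start("washing_machine", obs))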
Implementation:
Months 29-30, 9 days divided into two periods:
 first period: 4 days
 second period: 5 days
Thank You for your attention!