UNIVERSITY OF ULSTER
UNIVERSITY EXAMINATIONS 2015/2016
Semester 1 Examinations
SOLUTIONS
BSc(Hons)
Computing Science
Final Year
Enterprise Computing
Module COM580
CRN 9676
Module Coordinator: Dr Tom Lunney
Question 1
(a) “Best practice in object-oriented design often requires a developer to favor
Composition over Inheritance.” Explain what is meant by the terms inheritance and
composition and provide a rationale to justify the above statement.
(8 marks)
SOLUTION
Inheritance: In object-oriented programming, inheritance enables new objects to take on the properties of
existing objects. A class that is used as the basis for inheritance is called a superclass or base class. A
class that inherits from a superclass is called a subclass or derived class. Different kinds of objects often
have a certain amount in common with each other. Mountain bikes, road bikes, and tandem bikes, for
example, all share the characteristics of bicycles (current speed, current pedal cadence, current gear).
Yet each also defines additional features that make them different: tandem bicycles have two seats and
two sets of handlebars; road bikes have drop handlebars; some mountain bikes have an additional chain
ring, giving them a lower gear ratio. Object-oriented programming allows classes to inherit commonly
used state and behaviour from other classes. In this example, Bicycle now becomes the superclass of
MountainBike, RoadBike, and TandemBike.
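The bicycle hierarchy described above can be sketched in C# (class and member names are illustrative):

```csharp
// Superclass holding the state and behaviour common to all bicycles.
public class Bicycle
{
    public int CurrentSpeed { get; protected set; }
    public int CurrentCadence { get; set; }
    public int CurrentGear { get; set; }

    public void SpeedUp(int increment) => CurrentSpeed += increment;
}

// Subclasses inherit the common members and add the features
// that make them different.
public class MountainBike : Bicycle
{
    public bool HasExtraChainRing { get; set; }
}

public class RoadBike : Bicycle
{
    public bool HasDropHandlebars { get; set; } = true;
}

public class TandemBike : Bicycle
{
    public int NumberOfSeats { get; } = 2;
}
```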
Composition: Composition, put simply, means that an object contains a set of other objects. These
objects are often a base class type, allowing the containing class to use any derived concrete class at
run time. A real-world example of composition may be seen in the relation of an automobile to its parts,
specifically: the automobile 'has or is composed from' objects including steering wheel, seat, gearbox
and engine. When, in a language, objects are typed, types can often be divided into composite and noncomposite types, and composition can be regarded as a relationship between types: an object of a
composite type (e.g. car) "has an" object of a simpler type (e.g. wheel). In programming languages,
composite objects are usually expressed by means of references from one object to another; depending
on the language, such references may be known as fields, members, properties or attributes, and the
resulting composition as a structure, storage record, tuple, user-defined type (UDT), or composite type.
Inheritance is brittle, because the subclass can easily make assumptions about the context in which a
method it overrides is getting called. There’s a tight coupling between the base class and the subclass,
which you need to be aware of.
Composition has a nicer property. The coupling is reduced by just having some smaller things you plug
into something bigger, and the bigger object just calls the smaller object back. From an API point of view
defining that a method can be overridden is a stronger commitment than defining that a method can be
called, so composition has an advantage as it leads to looser coupling.
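The automobile example above can be sketched compositionally in C# (names are illustrative): the Car *has* an engine rather than *being* one, and the part is referenced through a base type.

```csharp
// The containing class holds its parts via an interface, so any
// derived concrete class can be plugged in at run time.
public interface IEngine
{
    int Start();
}

public class PetrolEngine : IEngine
{
    public int Start() => 900;   // returns idle rpm
}

public class Car
{
    private readonly IEngine engine;

    public Car(IEngine engine) { this.engine = engine; }

    // The bigger object simply calls the smaller object back.
    public int StartEngine() => engine.Start();
}
```

Swapping in a different IEngine implementation requires no change to Car, which is exactly the looser coupling the answer describes.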
(b) Explain what is meant by Dependency Injection.
(6 marks)
SOLUTION
Dependency injection is a software design pattern in which one or more dependencies (or services) are
injected, or passed by reference, into a dependent object (or client) and are made part of the client's
state. The pattern separates the creation of a client's dependencies from its own behaviour, which allows
program designs to be loosely coupled.
Dependency injection involves four elements:
• the implementation of a service object;
• the client object depending on the service;
• the interface the client uses to communicate with the service;
• the injector object, which is responsible for injecting the service into the client.
The injector object may also be referred to as an assembler, provider, container, or factory.
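The four elements can be sketched as follows (all names are illustrative):

```csharp
// The interface the client uses to communicate with the service.
public interface IMessageService
{
    string GetMessage();
}

// The implementation of the service object.
public class EmailService : IMessageService
{
    public string GetMessage() => "email";
}

// The client object depending on the service; it receives the
// dependency rather than creating it itself.
public class Client
{
    private readonly IMessageService service;

    public Client(IMessageService service) { this.service = service; }

    public string Run() => service.GetMessage();
}

// The injector (assembler): creates the service and injects it
// into the client.
public static class Injector
{
    public static Client Assemble() => new Client(new EmailService());
}
```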
(c) Consider the following snippet of code, which uses dependency injection:
public class Example
{
    private DatabaseThingie myDatabase;

    public Example(DatabaseThingie useThisDatabaseInstead)
    {
        myDatabase = useThisDatabaseInstead;
    }

    public void DoStuff()
    {
        ...
        myDatabase.GetData();
        ...
    }
}
Discuss the role/activity of dependency injection in this code and explain how it
leads to looser coupling.
(5 marks)
SOLUTION
This code passes the variable that references the outside dependency into the Example class via the
constructor. This "injects" the "dependency" into the class. Now when we use the variable
useThisDatabaseInstead (dependency variable), we use the object that we were given rather than
creating our own object. This injected object will only have been created according to an interface that is
of the same type as DatabaseThingie.
This facilitates a looser coupling (a desirable feature) between the two classes as we do not now have to
worry about how to create the class that we are depending upon. Instead an object of DatabaseThingie
type is created outside our Example class and injected in as a parameter.
(d) Dependency Injection has advantages when testing software as it allows you to isolate
classes. Illustrate via an appropriate code snippet (based on the above example in
(c)), how this class isolation can occur during testing.
(6 marks)
SOLUTION
public class ExampleTest
{
    public void TestDoStuff()
    {
        // MockDatabase is a subclass of DatabaseThingie, so we can "inject" it here
        MockDatabase mockDatabase = new MockDatabase();
        Example example = new Example(mockDatabase);

        example.DoStuff();

        mockDatabase.AssertGetDataWasCalled();
    }
}
In ExampleTest we are testing (and hence injecting into the Example class) an object of type MockDatabase.
As part of our testing we could switch to a different database by substituting any class that implements
the DatabaseThingie interface. Because we are using dependency injection we do not have to make any
changes to the Example class and hence the Example class is isolated from any class that we may inject
into it.
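A minimal MockDatabase for the test above might look like this (the shape of DatabaseThingie is not given in the question; here it is assumed to be a base class with a virtual GetData method):

```csharp
using System;

// Assumed shape of the dependency from part (c).
public class DatabaseThingie
{
    public virtual void GetData() { /* real database access */ }
}

// Test double: records whether GetData was called so the test
// can verify the interaction without touching a real database.
public class MockDatabase : DatabaseThingie
{
    private bool getDataWasCalled;

    public override void GetData()
    {
        getDataWasCalled = true;
    }

    public void AssertGetDataWasCalled()
    {
        if (!getDataWasCalled)
            throw new Exception("GetData was not called");
    }
}
```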
Question 2
(a) In the context of enterprise system development explain what is meant by the following
terms:
(i) REpresentational State Transfer (REST)
(ii) Service Oriented Architecture (SOA)
(iii) The Object-Relational mis-match
(9 marks)
SOLUTION (i) - 3 marks
REST is an acronym standing for REpresentational State Transfer and is rather like browsing. Although
the HTTP protocol does not in itself have state information, the current web page is in effect a kind of
state. If pages are arranged in a tree then this can be traversed exactly as if it were XML and the node
attributes can be the page contents. This is pretty much an exact equivalence (of a RESTful application).
Although REST is more an idea than a standard, it needs standards to work. Obviously it works over the Internet and uses HTTP; it also adheres to the standards for locating web resources, that is URLs, and most often it returns pages in XML, although this is not always the case. The XML is usually very simple
in contrast to SOAP. Suppose a REST web service was being designed to give information about films
now showing. The first page might be a set of towns, and each of these would have a link to pages for
each cinema in each town, and each of these would link to pages on the films showing at each cinema.
This tends to make for a lot of web pages, but they are very simple, often contain simple XML, and can
be generated automatically.
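The resource hierarchy for such a film-listings service might look like this (URLs are illustrative, not a real service):

```
http://example.com/films/towns                              -> list of towns
http://example.com/films/towns/belfast/cinemas              -> cinemas in a town
http://example.com/films/towns/belfast/cinemas/odeon/films  -> films now showing
```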
SOLUTION (ii) - 3 marks
Object oriented programming is now a mainstream approach in systems design. The approach aims to
encapsulate complexity and promote the use and reuse of components. A sound design maximises the
internal coherence of individual components and minimises the coupling between them. Recent
developments such as dependency injection allow this coupling to be reconfigured without any change to
the objects themselves. The use of design patterns and contract-based programming (interface first)
have led to systems that are far easier to understand and maintain than those used in the past. The resulting architectural design is known as a Service Oriented Architecture (SOA). SOA is a component based approach to developing your system in which a range of components, covering the different areas of functionality of your system, communicate via minimal interfaces.
SOLUTION (iii) - 3 marks
Relational databases allow data to be structured such that information is held in one place (table) and
one place only. This is of crucial importance as if data is held in more than one place it rapidly becomes
inconsistent and leads to synchronization issues that are amongst the messiest of computer science
problems to solve. The idea of a record which is uniquely defined by a simple or compound key has a
long history in data storage, indeed it was much used in paper filing systems, and the concept of relations
as flat tables is a useful and easily understood paradigm for holding data.
We are encouraged to design objects that contain (compose) sets or sequences of other objects, which
may themselves compose further objects. In relational database technology we are required to normalise
tables so as to remove any such sets or lists. The structure of the two representations is therefore quite
different and this has been termed an “impedance mismatch”. Hence we need a layer to transform one
representation to the other and to work in both directions.
(b) .NET WCF based Services are exposed through endpoints. Endpoints are a
combination of three aspects, namely address, contract and binding that lie between
service providers and service consumers. Explain the role of each of these three
aspects.
(6 marks)
SOLUTION
Address: In WCF, every service is associated with a unique address that tells clients where the service is
hosted. The address provides two important elements: the location of the service and the transport
protocol (transport schema) used to communicate with the service. Addresses always have the following
format:
[Transport]://[machine or domain][:optional port]/path
Here are a few sample addresses:
https://localhost:8080/Secureservice
TCP addresses use net.tcp for the transport, and typically include a port number. When a port number is
not specified, the TCP address defaults to port 808.
HTTP addresses use HTTP for transport, and can also use HTTPS for secure transport. When a port
number is unspecified, it defaults to 80.
IPC (Inter-Process Communication) addresses use net.pipe for transport, to indicate the use of the
Windows named-pipe mechanism. In WCF, services that use named pipes can only accept calls from the
same machine.
Contract: The contract is a platform-neutral and standard way of describing what the service does. WCF defines four types of contracts:
• Data contracts define which data types are passed to and from the service. WCF
implicitly defines contracts for built-in types such as int and string, but you can easily define explicitly
opt-in data contracts for custom types.
• Service contracts describe which operations the client can perform on the service.
• Fault contracts define which errors are raised by the service, and how the service handles and
propagates errors to its clients.
• Message contracts allow the service to interact directly via messages. Message contracts can be typed
or un-typed, and are useful in interoperability cases and when there is an existing message format you
have to comply with.
Binding: Binding simply tells clients how to consume services. It allows a data contract to be exchanged between clients and the service provider. A binding is actually a collection of binding elements that define how messages should be exchanged with the service. Binding elements address issues such as message encoding, the transport protocol and security options. Knowing which binding specification the service provider uses allows clients to communicate with it.
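The three aspects come together concretely in a WCF endpoint definition; a typical configuration fragment (service and contract names are illustrative) might look like:

```xml
<system.serviceModel>
  <services>
    <service name="MyNamespace.ProductService">
      <!-- address: where the service is;
           binding: how to communicate with it;
           contract: what the service offers -->
      <endpoint address="http://localhost:8080/ProductService"
                binding="basicHttpBinding"
                contract="MyNamespace.IProductService" />
    </service>
  </services>
</system.serviceModel>
```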
(c) The ADO.NET Entity Framework is an extended object relational mapping (ORM) tool
from Microsoft. As a developer with the Entity Framework you can develop your
application using (1) Model First approach, (2) Code First approach. Explain the
operation of both of these approaches.
(5 marks)
SOLUTION
.NET Entity Framework (EF) - Model First
Model-first implies that you start by designing a model of your program as opposed to writing C# code. EF
supports this with the Entity Designer. With the Entity Designer, you can define entities which are
abstractions representing the objects (later implemented in C# code) in the application domain. This is
where you create the Entity Data Model (EDM) and it is driven by an XML grammar from a model file
extension of .EDMX.
Using the Model First approach, I might create an Order entity that will represent instances of orders and
their details such as who placed the order, on what date and so on. Then I might create related entities to
hold additional information such as the items in the order and their cost and quantities. The modelling
experience gives me a natural way to build up a description of the data in my application visually, playing
around with it and changing it. The real power of this comes when I want to turn the entity data model into
reality. That's because EF can take the EDM and generate two things from it:
(1) Data Definition Language (DDL) representation to create a database as the model's concrete
representation
(2) Data access code (C# classes to be used by your program) to manipulate it.
.NET Entity Framework (EF) - Code First
The second option is code-first, where you start with C# classes (often called access code) to represent
the entities in your application. With this approach, there is no explicit Entity Data Model (EDM) and the
database is generated from the data access code (C# code) that you write. With code-first you write the
code you want as plain C# classes, then the implicit EF models are inferred from that C# code at runtime. These models can be used to generate the database as well as provide the mapping from your
hand-written C# classes. This is referred to as “code first (to a new database)”. Just because you can’t
see the EDM though doesn’t mean it’s not there. The metadata is still created under the covers and the
code you write (with conventions that must be followed) is used to create it at runtime.
If you are using the “code first” approach (i.e. want to work with C# code as opposed to the EDM) and the
database already exists then you can use the Entity Framework Power Tools to reverse engineer a “code
first” model (i.e. C# code) from an existing database. This mode of operation is called “code first (to an
existing database)”.
(d) The Entity Framework provides LINQ to Entities as a way to query a conceptual
(domain) model and return objects represented in your C# code. Briefly explain via an
outline example how this is done.
(5 marks)
SOLUTION
You can use LINQ query syntax for querying with the EDM.
using (var context = new SchoolDBEntities())
{
    var L2EQuery = from st in context.Students
                   where st.StudentName == "Bill"
                   select st;
}
First, you have to create an object of the context class, which is SchoolDBEntities.
You should initialize it in a using() block so that once it goes out of the using scope it will automatically call the Dispose() method of DbContext.
You can also use LINQ method syntax (lambda expressions) for querying:
using (var context = new SchoolDBEntities())
{
    var L2EQuery = context.Students.Where(s => s.StudentName == "Bill");
}
In both of the syntaxes above, context returns an IQueryable.
Question 3
(a) In the context of Service Oriented Architecture(SOA) explain the following and
illustrate the relationship between them:
(i) Reference Architecture
(ii) Enterprise Architecture
(iii) Solution Architecture
(9 marks)
SOLUTION
SOA is a reference architecture that guides and constrains solution architectures.
The reference architecture can have different scopes such as Enterprise architecture, Project architecture,
Software architecture, and so on.
Reference Architecture
A reference architecture provides a template solution for an architecture for a particular domain. It also
provides a common vocabulary with which to discuss implementations, often with the aim to stress
commonality.
Using a reference architecture has several advantages:
• Standardization of terminology, taxonomy, and services eases working with suppliers and partners.
• It reduces the cost of developing, for example, an enterprise architecture. The organization can focus on what sets it apart from the reference architecture and other organizations in the industry instead of reinventing the wheel.
• It makes it easier to implement commercial off-the-shelf (COTS) software because common terminology and processes are used.
Obviously, it also has some disadvantages:
• It takes some time to learn the industry reference architecture.
• It can often be (overly) complete.
• The reference architecture is typically written by a group of people from different organizations and so is the result of a compromise.
Enterprise Architecture
Enterprise Architecture is the organizing logic for business processes and IT
infrastructure, reflecting the integration and standardization requirements of the company's operating
model. The enterprise architecture provides a long term view of a company's processes, systems, and
technologies.
The Zachman framework is a framework for structuring the enterprise architecture and consists of a 6 x 6 matrix.
Solution Architecture
Solution Architecture is a detailed (technology) specification of building blocks to realise a business need.
Open Group recognizes different types of solutions in the solution continuum:
Foundation solutions: This can be a programming language, a process or other highly generic concepts,
tools, products, and services.
Common systems solutions: For example CRM systems, ERP systems as we have seen in the previous
examples, and also security solutions.
Industry solutions: These are solutions for a specific industry. They are built from foundation solutions and
common systems solutions, and are augmented with industry-specific components.
Organization-specific solutions: An example of this is the solution for the health insurance companies that
want to offer self-service to prospective clients. The solution architecture describes the multi-channel
solution for the organization, the tools and products that are used to implement it, and the relationship
between the different layers.
(b) An international software company wants to change the way the order-to-cash process
is executed. The company has started to sell their products online, and the customer
can download the product after paying for it online. This means that the order-to-cash process needs to be adjusted: in this case the customer has to pay upfront, instead of after receiving the product, which is what happened in the traditional way. Rather than
changing the existing process to accommodate online purchases, the company decides
to create a whole new application for the online business to run along the existing
system.
Why might the company take this approach and what are the problems this decision
may potentially cause going forward?
(4 marks)
SOLUTION
The process logic (the order of the steps) is coded into the custom application that the organization uses
for this process. Therefore, changing the process impacts the entire application. This is expensive and
very disruptive for day-to-day operations because it is one of the core processes of the company. Rather
than changing the existing process to accommodate online purchases, the company decides to create a
whole new application, thus creating a problem with data synchronization, customer service, and
management information.
Since there is no clear separation in the application between the different process logic components they
cannot easily be taken out or replaced. This will lead to misalignment of business and IT, and duplication
of functionality and data. In this example IT can't keep up with process changes because of the way the
applications are structured and it solves this with data duplication and functional duplication, thus creating
more problems for the future.
(c) Outline how the software company business outlined in (b) could be redesigned using a
service oriented approach.
(7 marks)
SOLUTION
You can realize the order-to-cash business process by orchestrating the use of services (capabilities) in a
particular order.
The key concept here is that if you want the business process to be flexible, to be able to make changes
quickly or to reuse existing functionality and data, the use of services has several benefits.
The following figure displays the order-to-cash business process with:
• its process steps on the one hand (top),
• the services that are orchestrated to realize the business process on the other hand (bottom),
• the usage of services by the process (dotted lines).
The following steps are executed:
1. A new order is received (this event starts the process).
2. The order is booked using OrderService.
3. The order is fulfilled using OrderService.
4. The goods are distributed using TransportService and the CustomerService.
5. The customer is billed using CustomerService, BillingService, and DocumentService.
6. If the customer does not pay, dunning is started using the DunningService.
(d) Explain how the service oriented design approach could facilitate both the online and
offline business.
SOLUTION
The company now decides to use the OrderService offered by the packaged SOA/Web application.
This service can be used to create order entries, retrieve order information, cancel orders, and so on.
Both the company's online web application and the Customer Care Portal will start to use OrderService.
The Order System offers the interface to the web application and to the customer portal. These systems
don't need to duplicate the logic or the data of the Order System, as was the case previously. Both
systems use the implementation of OrderService by accessing the interface that the Order System offers.
Question 4
(a) Windows Communication Foundation (WCF) is becoming a replacement for Web
Services on the .NET platform. Outline the functional scope of this technology.
(6 marks)
SOLUTION
WCF unifies these capabilities into a single, common, general service-oriented programming model for communication. WCF provides a common approach using a common API, so that developers can focus on their application rather than on the communication protocol.
With WCF, we can define our service once and then configure it in such a way that it can be used via
HTTP, TCP, IPC, and even Message Queues. We can consume Web Services using server side scripts
(ASP.NET), JavaScript Object Notations (JSON), and even REST (Representational State Transfer).
(b) Explain how communication takes place with .NET WCF, paying particular attention to
what is known as the A, B, C of WCF services.
(7 marks)
SOLUTION
All communications with the WCF service will happen via the endpoints. The endpoints specify a Contract
that defines which methods of the Service class that will be accessible via the endpoint; each endpoint
may expose a different set of methods. The endpoints also define a binding that specifies how a client will
communicate with the service and the address where the endpoint is hosted.
So if we want to use a WCF service from an application, then we have three major questions:
Where is the WCF service located from a client’s perspective?
How can a client access the service, i.e., protocols and message formats?
What is the functionality that a service is providing to the clients?
Once we have the answer to these three questions, then creating and consuming the WCF service will
be a lot easier for us. The answer to these above questions is what is known as the ABC of WCF services
and in fact are the main components of a WCF service. So let’s consider each question one by one.
Address: Like a web service, a WCF service also provides a URI which can be used by clients to get
to the WCF service. This URI is called the Address of the WCF service. This will solve the first
problem of “where to locate the WCF service?” for us.
Binding: Once we are able to locate the WCF service, we should think about how to communicate
with the service (protocol wise). The binding is what defines how the WCF service handles the
communication. It could also define other communication parameters like message encoding, etc.
This will solve the second problem of “how to communicate with the WCF service?” for us.
Contract: Now the only question we are left with is about the functionalities that a WCF service
provides. Contract is what defines the public data and interfaces that WCF service provides to the
clients.
(c) The interface code for a WCF service is shown below:
namespace LINQNorthwindService
{
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        [FaultContract(typeof(ProductFault))]
        Product GetProduct(int id);

        [OperationContract]
        [FaultContract(typeof(ProductFault))]
        bool UpdateProduct(ref Product product, ref string message);
    }

    [DataContract]
    public class Product
    {
        [DataMember]
        public int ProductID { get; set; }
        [DataMember]
        public string ProductName { get; set; }
        [DataMember]
        public string QuantityPerUnit { get; set; }
        [DataMember]
        public decimal UnitPrice { get; set; }
        [DataMember]
        public bool Discontinued { get; set; }
        [DataMember]
        public byte[] RowVersion { get; set; }
    }

    [DataContract]
    public class ProductFault
    {
        public ProductFault(string msg)
        {
            FaultMessage = msg;
        }

        [DataMember]
        public string FaultMessage;
    }
}
In the context of the role of this WCF Service explain, ServiceContract,
OperationContract and DataContract.
(6 marks)
SOLUTION
Service Contract: Defines the kind of operations supported by the service, in this case the GetProduct and UpdateProduct operations. It also exposes certain information to the client such as:
• Data types in the message.
• Locations of the operations, or where the methods are defined.
• Protocol information and serialization format.
• Message exchange patterns (whether the behaviour of the message is one-way, duplex or request/reply).
Policy and Binding: Specify important information such as security, protocol.
Data Contract: Agreement between a service and a client on the data that has to be exchanged. Also
defines what data structures and parameters to be used by services in order to interact with the client.
In this code serialized Product objects can be sent between the service/client. These objects have
interface members (e.g. ProductID) that are visible to the client and can hence be called by the client.
Operation Contract: An operation contract is defined within a service contract. It defines the parameters and return type of an operation. An operation contract can also define operation-level settings, such as the transaction flow of the operation, the direction of the operation (one-way, two-way, or both ways), and the fault contract of the operation.
(d) Explain why it is beneficial to have a DataContract defined for ProductFault in the WCF
interface code in part (c) of this question.
(6 marks)
SOLUTION
Exceptions are technology-specific and therefore are not suitable for crossing the service boundary of
SOA-compliant services. Thus, for WCF services, we should not throw normal exceptions. What we need
are SOAP faults that meet industry standards for seamless interoperability. Defining a DataContract for
ProductFault allows us to provide this interoperability.
The service interface layer operations that may throw FaultExceptions must be decorated with one or
more FaultContract attributes, defining the exact FaultException.
Above we have decorated the service operations GetProduct and UpdateProduct with the following
attribute: [FaultContract(typeof(ProductFault))]
This is to tell the service consumers that these operations may throw a fault of the type ProductFault,
which they will be able to interpret as it is defined in the interface.
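Inside the service implementation, the typed fault would be raised roughly as follows (a sketch only: LookUpProduct is a hypothetical helper, and the fragment assumes the Product and ProductFault types from part (c) plus a reference to System.ServiceModel):

```csharp
public Product GetProduct(int id)
{
    Product product = LookUpProduct(id);   // hypothetical data-access helper
    if (product == null)
    {
        // FaultException<T> is serialized as a standards-compliant SOAP
        // fault, so any client that knows the FaultContract can interpret
        // it, regardless of its own technology stack.
        throw new FaultException<ProductFault>(
            new ProductFault("No product found with id " + id));
    }
    return product;
}
```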
Question 5
(a) Test Cases are a set of inputs with known expected outputs that are used to test the
functional correctness of the program. Outline the characteristics of a good test case
and comment on why good test cases are important.
(5 marks)
SOLUTION
Good test cases share several similarities; they:
• have a clear purpose
• focus on testing only a few aspects of the test subject
• produce output which can be easily verified by the tester
• give reproducible errors.
Remember, ultimately from the developer’s point of view, the purpose of testing is to discover which parts need to be fixed. Thus, not having a clear idea of which section is being tested, or having too many
aspects tested at the same time defeats the purpose. This is because if a test case fails, the developer
needs to be able to easily pinpoint the problem. It is also important to know the expected output of the
test case so the tester can verify that the test subject does not contain any logic errors.
(b) When designing a test suite it is important to bear in mind (i) test case independence
and (ii) test case coverage. Explain these two terms and describe how you would use
a Test Matrix to ensure good coverage.
(8 marks)
SOLUTION
Independence
Test case independence simply means that there should not be interaction between the test cases (i.e. there shouldn’t be a test case which depends on another test case’s output). That is to say, you should be able to run the test cases in any order without having any impact on the overall evaluation of the test suite.
Coverage
Coverage is a very broad term in testing as it can refer to many things. When a person refers to
coverage, usually they will refer to line-coverage, block-coverage, branch-coverage, or path-coverage.
However, the definition can be extended to functionality coverage as well where System testing and Acceptance testing are concerned.
Using a Test Matrix for Coverage
A testing matrix consists of four elements: a test-case column, a test-specifications row, a checklist area, and a tally row along the bottom. The test-case column contains the different test cases that you
are using in that particular test-suite. The test specifications row contains the different test specifications
that your test subject is involved with.
The testing matrix can be used both to document the functionality tested by a test case and to design a test case based on the functionality you want to test.
To document the functionality tested, you can list the test cases in the test suite in the first column.
Subsequently, mark the columns corresponding to the test specifications that are tested for each test
case. This will allow you to formally document your level of coverage.
(c) Using Test Driven Development (TDD), all test cases are automated with the help of a
Unit Testing Framework (UTF). Describe the TDD style of software development.
(4 marks)
SOLUTION
TDD is a style of development where:
- an exhaustive suite of tests is maintained
- no functional code (i.e., the code that implements functions of the software) is written unless it has associated tests
- the tests are written first
- the tests determine what the functional code should do
TDD uses a “test first” approach in which test cases are written before code is written. These test cases
are written one-at-a-time and followed immediately by the generation of code required to get the test
case to pass. Software development becomes a series of very short iterations in which test cases
drive the creation of software and ultimately the design of the program.
(d) Visual Studio supports the concept of Test Driven Development (TDD) through its Unit
Testing Framework (UTF).
Here is a piece of code that has been developed as part of such testing.
[TestMethod]
public void AddTest()
{
    var system = new BasicMathLibrary();

    int expected = 42;
    int actual = system.Add(40, 2);

    Assert.AreEqual(expected, actual, "The expected value did not match the actual value");
}
What is the significance of the “ [TestMethod]” attribute?
Explain the functionality of the “Assert.AreEqual(…);” statement.
Also comment on how you would stub out the Add(…) method to get the code to
compile and run without adding actual code. What would happen when you run
such stubbed-out code?
(8 marks)
SOLUTION
Above the AddTest() method is a [TestMethod] attribute. This attribute allows the testing framework to
identify this method as a potential test that needs to be run.
The Assert.AreEqual(…) method is used to check whether or not the expected value matches the actual
value returned from the Add method.
However, building the solution at this point will fail, because the BasicMathLibrary class and the Add
method have not yet been created.
You create the BasicMathLibrary class and the Add method, but instead of adding actual functionality to
the Add method, you throw a NotImplementedException. This allows you to quickly stub out
methods without adding actual execution logic. When the stubbed-out Add method executes, the unit test
fails because the method or operation isn't implemented, which is exactly what you would expect at this stage.
Question 6
(a) Outline the Risks and Benefits associated with Cloud Computing.
(6 marks)
SOLUTION
Outsourcing to cloud providers:
Commercial cloud computing effectively outsources portions of the IT stack, ranging from hardware
through applications, to cloud providers. Cloud computing allows a consumer to benefit by incrementally
leveraging (i.e. using) a more significant capital investment made by a provider. The consumers also
benefit significantly by being able to dynamically scale their demand of the cloud services.
Dependence on the network:
Cloud computing is fundamentally dependent on the network to connect the cloud with the consumer.
For those who have redundant network connections with robust bandwidth this will not be an issue, but
for those who don’t, consideration should be given concerning singular dependence on network based
offerings, and how business continues when the network is unavailable or unreliable. Poorly performing
networks can make a large impact on the availability of services to the consumer.
Dependence on specific cloud providers (lock-in):
Vendor lock-in is a risk with the current maturity of cloud computing.
Vendor neutrality is often best achieved by utilizing industry or open standards.
Developing applications to leverage one cloud provider’s offerings can lead to lock-in with one vendor’s
solution.
Provider costs:
Creating a generic reusable software component for a broad audience takes more resources (20
percent to 100 percent more) than creating a less generic solution. The cost of reuse, therefore, shifts to
the service providers, which benefits the consumers.
Contracts and service-level agreements (SLAs):
Cloud offerings are defined with a discrete interface and performance expectation.
This agreement can be captured in an SLA between the provider and consumer, and this document can
be made a part of the contractual relationship between the two.
(b) Cloud computing and Service Oriented Architecture (SOA) have important overlapping
concerns and common considerations, discuss.
(7 marks)
SOLUTION
The most important overlap occurs near the top of the cloud computing stack, in the area of Cloud
Services, which are network accessible application components and software services, such as Web
Services. Both cloud computing and SOA share concepts of service orientation. Services of many types
are available on a common network for use by consumers. Cloud computing focuses on turning aspects
of the IT computing stack into commodities that can be purchased incrementally from the cloud based
providers and can be considered a type of outsourcing in many cases. For example, large-scale online
storage can be procured and automatically allocated in terabyte units from the cloud. However, cloud
computing is currently a broader term than SOA and covers the entire stack from hardware through the
presentation layer software systems. SOA, though not restricted conceptually to software, is often
implemented in practice as components or software services, as exemplified by the Web Service
standards used in many implementations. These components can be tied together and executed on
many platforms across the network to provide a business function.
While there are important overlaps between cloud computing and SOA, they have a different emphasis,
resulting from their original focus on different problem sets. SOA implementations are fundamentally
enterprise integration technologies for exchanging information between systems of systems. SOA
focuses on the problem of making systems integration more efficient, and if systems integration as a
trend continues to increase as described, efficiency in this task will become increasingly important. SOA
implementation technologies, such as the group of Web Service standards, allow a consumer software
application to invoke services across a common network. Further, they allow integration across a variety
of development languages and platforms, providing a language neutral software layer. A key benefit of
enterprise SOA efforts is the ability to make system-to-system interfaces consistent in the enterprise
architecture, thus saving resources on future integration and hopefully improving the speed at which
integration can occur.
The emphasis of cloud computing is to leverage the network to outsource IT functions across the entire
stack. While this can include software services as in an
SOA, it goes much further. Cloud computing allows the marketplace to offer many IT functions as
commodities, thus lowering the cost to consumers when compared to operating them internally.
Therefore, while the two concepts share many common characteristics, they are not synonymous and
can be pursued either independently or as concurrent activities. So Cloud computing does not replace
SOA, as the need to support broader and more consistent integration of systems will continue. The trend
by leadership teams to consider IT capabilities as a commodity will continue to put downward pressure
on IT budgets and, consequently, integration and data exchange will have to get more streamlined and
efficient, across a portfolio of disparate systems.
Cloud computing and SOA are not synonymous, though they share many characteristics. Solving one
does not complete the other. For example, consistently integrating your software systems as distributed
components or services (SOA) will not inherently virtualize your hardware, or outsource your presentation
layer to a third party provider (cloud computing). Accomplishing successful outsourcing of commodity IT
functions (cloud computing) does not integrate systems custom to your business, or aggregate data into
a single display “mash-up” (SOA). While SOA and cloud computing share many of the same concerns,
considering all the layers of the IT support stack will require coordinating multiple dependent efforts. In
summary, both cloud computing and SOA can support good engineering practices by enabling
fundamental concepts such as abstraction, loose coupling, and encapsulation. Both approaches rely on
the definition of clear and unambiguous interfaces, predictable performance and behaviour, interface
standards selection, and clear separations of functionality. So, cloud computing and SOA can be pursued
independently, or concurrently as complementary activities.
(c) Consider a scenario where a chain of photo processing stores makes use of a
cloud service to render or reformat digital media files. The photo processing
chain has a number of stores spread across the country, and wishes to centralize large
image and video processing to reduce two aspects of the system: the amount of
hardware in each store; and the complexity of maintaining and supporting the
hardware. Outline how this activity might be migrated to the cloud.
(6 marks)
SOLUTION
When a customer comes into a store with a video that needs to be converted to a different format, the
video file is first uploaded to a cloud storage service, and then a message is placed in a cloud queue
service that a file is on the cloud storage platform and needs to be converted to a different format. An
application controller that is running compute instances receives the message from the queue, and then
either uses an existing instance of a virtual machine, or creates a new instance, to handle the
reformatting of the video. As soon as this process is complete, the controller places a message in the
queue to notify the store that the project is complete. Additionally the preceding scenario could easily be
converted to a fully online experience, so that customers could upload files for processing without having
to go to a physical location.
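The storage-then-queue flow above can be sketched in Python; the in-memory dictionary and queue below are stand-ins for whatever cloud storage and queue services the provider's SDK exposes (all names here are hypothetical):

```python
import queue

# In-memory stand-ins for the cloud storage service and the cloud queue service.
cloud_storage = {}          # object key -> file bytes
job_queue = queue.Queue()   # messages describing conversion jobs

def store_uploads_video(key, data, target_format):
    """Store front end: upload the file to storage, then place a message in
    the queue saying the file needs converting."""
    cloud_storage[key] = data
    job_queue.put({"key": key, "format": target_format})

def controller_process_one():
    """Application controller: take a message from the queue, reformat the
    video on a (virtual machine) instance, store the result, and return the
    completion notification for the store."""
    msg = job_queue.get()
    video = cloud_storage[msg["key"]]
    converted = video + b" [as %s]" % msg["format"].encode()  # fake reformat
    cloud_storage[msg["key"] + "." + msg["format"]] = converted
    return {"status": "complete", "key": msg["key"]}

store_uploads_video("holiday.avi", b"raw-bytes", "mp4")
notification = controller_process_one()
print(notification)  # -> {'status': 'complete', 'key': 'holiday.avi'}
```

In a real deployment the controller would decide between reusing an existing virtual machine instance and starting a new one before processing each message, which is the elasticity the scenario relies on.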
(d) Explain what is meant by Virtualization and outline the benefits of virtualization.
(6 marks)
SOLUTION
Virtualization involves abstracting the hardware to run virtual instances of multiple guest operating
systems on a single host operating system. You can see Virtualization in action by installing Microsoft
Virtual PC, VMware Player or Sun VirtualBox. These are desktop virtualization solutions that let you
install and run an OS within the host OS. The virtualized guest OS images are called Virtual Machines.
However, the benefits of virtualization are realized more on servers than on desktops, as there are many
more reasons for running virtualization on servers in a traditional data centre. Some of these
reasons are:
Mean Time To Restore
It is far more flexible and faster to restore a failed web server, app server or a database server that is
running as a virtualized instance. Since these instances are physical files on the hard disk for the
host operating system, just copying over a replica of the failed server image is faster than restoring a
failed physical server. Administrators can maintain multiple versions of the VMs, which come in handy
during restoration. A major benefit is that the whole copy and restore process can be automated
as part of a disaster recovery plan.
Maximizing the server utilization
It is very common that certain servers in the data centre are less utilized while some servers are
maxed out. Through virtualization, the load can be more evenly spread across all the servers. There
are management software offerings that will automatically move VMs to idle servers to dynamically
manage the load across the data centre.
Reduction in maintenance cost
Virtualization has a direct impact on cost. First, by consolidating the data centre to run on fewer but
more powerful servers, there is a significant cost reduction. The power consumed by the data centre and
the maintenance cost of the cooling equipment come down drastically. The other problem that
virtualization solves is the migration of servers. When the hardware reaches the end of the lifecycle,
the physical servers need to be replaced. Backing up and restoring the data and the installation of
software on a production server is very complex and expensive. Virtualization makes this process
simple and cost effective. The physical servers will be replaced and the VMs just get restarted
without any change in the configuration.
Efficient management
All major virtualization platforms have a centralized console to manage, maintain, track and monitor
the health of the physical servers and the VMs running on them. Because of this simplicity and
these dynamic capabilities, IT administrators spend less time managing the infrastructure. This
results in better management and cost savings for the company.
END