A Metric for Assessing Component Balance
of Software Architectures
Eric Bouwers
Software Improvement Group
Amsterdam, The Netherlands
[email protected]
José Pedro Correia
Software Improvement Group
Amsterdam, The Netherlands
[email protected]
Arie van Deursen ∗
Delft University of Technology
Delft, The Netherlands
[email protected]
Joost Visser
Software Improvement Group
Amsterdam, The Netherlands
[email protected]
ABSTRACT
The decomposition of a software system into components is
a major decision in a software architecture, having a strong
influence on many of its quality aspects. A system’s ana-
lyzability, in particular, is influenced by its decomposition
into components. But into how many components should
a system be decomposed? And how should the elements of
the system be distributed over those components?
In this paper, we set out to find an answer to these ques-
tions by capturing them jointly inside a metric called Com-
ponent Balance. We calibrate this generic metric with the
help of a repository of industrial and open source systems.
We report on an empirical study that demonstrates that the
metric is strongly correlated with ratings given by experts.
In a case study we show that the metric provides relevant
results in various evaluation scenarios.
Categories and Subject Descriptors
D.2.8 [Software Engineering]: Metrics; D.2.11 [Software
Engineering]: Software Architectures
General Terms
Measurement
Keywords
Maintainability, analyzability, software architecture evalua-
tion
1. INTRODUCTION
∗Work partially done while at the Computer Human Interac-
tion and Software Engineering Lab (CHISEL), Department
of Computer Science, University of Victoria, Canada.
Software architecture is loosely defined as the organiza-
tional structure of a software system including components,
connections, constraints, and rationale [20]. Choosing the
right architecture for a system is important, since “Archi-
tectures allow or preclude nearly all of the system’s quality
attributes” [10]. Fortunately, there is a wide range of soft-
ware architecture evaluation methods available to assist in
choosing an initial architecture (for overviews see [2, 12]).
After this initial choice it is important to regularly evalu-
ate whether the architecture of the software system is still in
line with the requirements of the stakeholders [27]. However,
a complete re-evaluation of a software architecture involves
the interaction of different stakeholders and experts, which
makes this a time-consuming and expensive process. Per-
forming such an evaluation on a weekly or monthly basis
is therefore not feasible, even though this type of recurring
evaluation helps in detecting problems as early as possible.
To reduce the cost of evaluations, software metrics can
be used. If these metrics approximate aspects of architec-
tural quality, their continuous monitoring can assist in de-
termining whether the quality is deviating (too much) from
the desired course. The work on software metrics for soft-
ware architectures has traditionally focused on the
way components depend on each other, and how compo-
nents are internally structured (coupling and cohesion,
respectively [26, 29]). Nevertheless, focusing only on these
types of metrics provides a limited view of the quality of a
software architecture.
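As an illustration of this traditional kind of metric, the following
sketch counts outgoing cross-component dependencies per component.
It is a minimal example under our own assumptions: the component
names and the pair-based dependency format are invented for
illustration, not taken from the cited work.

from collections import Counter

def coupling_per_component(dependencies):
    """Count outgoing cross-component dependencies per component.

    `dependencies` is an iterable of (from_component, to_component)
    pairs, e.g. extracted from import statements.
    """
    counts = Counter()
    for src, dst in dependencies:
        if src != dst:  # count only dependencies crossing a component boundary
            counts[src] += 1
    return counts

# Example: 'ui' depends on two other components, 'core' on one.
deps = [("ui", "core"), ("ui", "storage"), ("core", "storage"), ("core", "core")]
print(coupling_per_component(deps))  # Counter({'ui': 2, 'core': 1})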
For example, the dependencies between components do
not fully capture all four sub-characteristics of main-
tainability as defined by the ISO/IEC 9126 [18] standard for soft-
ware quality. In particular, dependencies between compo-
nents only partly influence the sub-characteristic of analyz-
ability, which is defined as “the capability of the software
product to be diagnosed for deficiencies or causes of failures
in the software, or for the parts to be modified to be identi-
fied” [18]. To get a broader perspective on this quality at-
tribute, coupling and cohesion metrics should be augmented
with metrics which capture whether the components of a
system provide enough discriminative power, without over-
whelming a software engineer with too many choices.
In this paper, we propose a new metric called Component
Balance to quantify whether a system is decomposed into
a reasonable number of balanced components.
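To make the intuition behind such a metric concrete, the following
sketch combines a score for the number of components with a score
for how evenly system elements are spread over them. This is our own
minimal illustration, not the paper's calibrated definition: the
ideal range of 2 to 12 components and the use of the Gini coefficient
for uniformity are assumptions made here for the example.

def gini(sizes):
    """Gini coefficient of component sizes: 0 means perfectly even."""
    sizes = sorted(sizes)
    n, total = len(sizes), sum(sizes)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form for the Gini coefficient of a sorted sample.
    cum = sum((i + 1) * s for i, s in enumerate(sizes))
    return (2 * cum) / (n * total) - (n + 1) / n

def component_balance(sizes, ideal_min=2, ideal_max=12):
    """Combine a component-count score with a size-uniformity score."""
    n = len(sizes)
    if ideal_min <= n <= ideal_max:
        count_score = 1.0
    else:
        # Penalize proportionally to the distance from the ideal range.
        nearest = ideal_min if n < ideal_min else ideal_max
        count_score = n / nearest if n < nearest else nearest / n
    return count_score * (1.0 - gini(sizes))

print(component_balance([100, 110, 95, 105]))  # evenly sized: close to 1.0
print(component_balance([500, 10, 5, 5]))      # one dominant component: low

Under these assumptions, a system split into four similarly sized
components scores near 1.0, whereas a system dominated by one large
component scores low, even when both have an acceptable number of
components.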
A Cognitive Model for Software Architecture Complexity
Eric Bouwers ‡, Joost Visser
Software Improvement Group
Amsterdam, The Netherlands,
Email: {e.bouwers, j.visser}@sig.nl
Carola Lilienthal
C1 WPS GmbH / University of Hamburg
Hamburg, Germany
Email: [email protected]
Arie van Deursen‡
‡ Delft University of Technology
Delft, The Netherlands
Email: [email protected]
Abstract—This paper introduces a Software Architecture
Complexity Model (SACM) based on theories from cognitive
science and system attributes that have proven to be indicators
of maintainability in practice.
SACM can serve as a formal model to reason about why
certain attributes influence the complexity of an implemented
architecture. Also, SACM can be used as a starting point in
existing architecture evaluation methods such as the ATAM.
Alternatively, SACM can be used in a stand-alone fashion to
reason about a software architecture’s complexity.
Keywords-Software Architecture Evaluation, Software Archi-
tecture, Complexity, Cognitive models
I. INTRODUCTION
Software architecture is loosely defined as the organiza-
tional structure of a software system including components,
connections, constraints, and rationale [1]. The importance
of having a high-quality software architecture is well un-
derstood [2], and confirmed by the wide range of available
architecture evaluation methods (for overviews see [3, 4]).
However, many architecture evaluation methods do not
define a clear notion of “quality”. Because of this, the
process of evaluating an architecture usually includes the
definition of such a quality model, which makes the initial
investment to start performing architecture evaluations rather
high. This has been cited as one of the reasons for the low
adoption of architecture evaluations in industry [5].
To counter this lack of adoption, we recently introduced
LiSCIA, a Light-weight Sanity Check for Implemented
Architectures [6]. LiSCIA comes with a set of questions
which together form an implicit, informal quality model.
However, the method lacks a formal model to explain why
certain system attributes influence the maintainability of the
implemented architecture.
A formal model that does provide this type of explanation
has been introduced by Lilienthal [7]. This architecture com-
plexity model is founded on theories in the field of cognitive
science and on general software engineering principles. The
model has been successfully applied in several case studies.
However, due to the design of the model it can only explain
the complexity of an architecture from the perspective of
an individual developer. In addition, the model does not
explain all system attributes that experts usually use during
the evaluation of an implemented architecture.
The main contribution of this paper is the definition
of the Software Architecture Complexity Model (SACM).
SACM is a formal model to reason about 1) why an
implemented software architecture is difficult to understand,
and 2) which elements complicate the verification of the
implemented architecture against the intended architecture.
SACM extends the architecture complexity model of Lilien-
thal by taking into account the environment in which a
developer has to understand an architecture. In addition,
SACM can serve as a formal model to support LiSCIA.
II. BACKGROUND
Over the years, several proposals have been made to define
the complexity of an architecture in the form of metrics
(for an overview see, for example, [7]). Unfortunately, these
contributions usually provide insight into the complexity of
a single attribute or a small set of attributes. To provide
insight into the complexity of the software architecture as a
whole, a model which explains the relationship among the
separate metrics is needed.
One way to provide a framework which can express the
relationship among metrics is to define a factor-criteria-
metric-model (FCM-model) [8]. An FCM-model aims to
operationalize an overall goal by reducing it to several
factors. These factors are still abstract terms and need to
be substantiated by a layer of criteria. The lowest level in
an FCM-model consists of metrics that are derived from the
criteria.
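A factor-criteria-metric hierarchy can be pictured as a small tree
structure. The sketch below shows one possible encoding; the factor
and criteria names are invented placeholders, not the actual content
of any published FCM-model.

from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    metrics: list[str] = field(default_factory=list)  # lowest level: concrete metrics

@dataclass
class Factor:
    name: str
    criteria: list[Criterion] = field(default_factory=list)

# Overall goal ("architecture complexity") reduced to abstract factors,
# substantiated by criteria, which are measured by metrics.
architecture_complexity = [
    Factor("pattern conformity", [
        Criterion("layering discipline", ["upward-dependency count"]),
    ]),
    Factor("modularity", [
        Criterion("component coupling", ["cross-component dependency count"]),
        Criterion("component size", ["lines of code per component"]),
    ]),
]

for factor in architecture_complexity:
    for criterion in factor.criteria:
        print(factor.name, "->", criterion.name, "->", criterion.metrics)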
An existing FCM-model for architecture complexity can
be found in the work of Lilienthal [7]. The complexity model
of Lilienthal (CML) describes three factors of architecture
complexity. These factors are based upon a combination
of theories from cognitive science and general software
engineering principles. Each of these factors is translated
into a set of criteria, which are in turn evaluated using
questionnaires and metrics [7, 9].
The case studies used to evaluate CML involved systems
implemented in only a single technology. Because of this,
the complexity of using different programming languages
inside a single system is not taken into account. Unfortu-
nately, this is one of the fifteen system attributes that are used
by experts to evaluate an implemented architecture [10]. A
closer examination of how CML can be mapped onto these