Slide 1

Slide 1 text

The Scientific Programmer “How do you know that you know?” @EricBouwers

Slide 2

Slide 2 text

What do you want to improve in your development process?

Slide 3

Slide 3 text

Interruptions!

Slide 4

Slide 4 text

So what do you do?

Slide 5

Slide 5 text

Rubber duck debugging

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

Source: http://en.wikipedia.org/wiki/File:Bundesarchiv_B_145_Bild-F031434-0006,_Aachen,_Technische_Hochschule,_Rechenzentrum.jpg

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

They simply don’t believe!

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

Research: “The systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.”

Slide 13

Slide 13 text

Researchers
Source: http://en.wikipedia.org/wiki/File:Solvay_conference_1927.jpg

Slide 14

Slide 14 text

Source: http://upload.wikimedia.org/wikipedia/commons/e/ea/University_of_Bradford_school_of_management.jpg

Slide 15

Slide 15 text

Source: http://commons.wikimedia.org/wiki/File:Views_of_the_LHC_tunnel_sector_3-4,_tirage_1.jpg

Slide 16

Slide 16 text

No content

Slide 17

Slide 17 text

“Developing a deep understanding of how people build and evolve software systems.”

Slide 18

Slide 18 text

“Developing novel methods, techniques and tools that advance the way in which software is built and modified.”

Slide 19

Slide 19 text

Knowledge problems · Practical problems
Roel Wieringa, "Design Science as Nested Problem Solving," DESRIST 2009

Slide 20

Slide 20 text

How do researchers do this?

Slide 21

Slide 21 text

Identify a problem → Gather information about solution → Propose solution → Evaluate (with data) → Reflect on evaluation → Peer review → Publish

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

✖ A Metric for Assessing Component Balance of Software Architectures
Eric Bouwers (Software Improvement Group, Amsterdam), José Pedro Correia (Software Improvement Group, Amsterdam), Arie van Deursen (Delft University of Technology), Joost Visser (Software Improvement Group, Amsterdam). Submitted to ICSE'11, Honolulu, Hawaii, USA.
Abstract: The decomposition of a software system into components is a major decision in a software architecture, having a strong influence on many of its quality aspects. A system's analyzability, in particular, is influenced by its decomposition into components. But into how many components should a system be decomposed? And how should the elements of the system be distributed over those components? In this paper, we set out to find an answer to these questions by capturing them jointly inside a metric called Component Balance. We calibrate this generic metric with the help of a repository of industrial and open source systems. We report on an empirical study that demonstrates that the metric is strongly correlated with ratings given by experts. In a case study we show that the metric provides relevant results in various evaluation scenarios.

✔ A Cognitive Model for Software Architecture Complexity
Eric Bouwers, Joost Visser (Software Improvement Group, Amsterdam), Carola Lilienthal (C1 WPS GmbH / University of Hamburg), Arie van Deursen (Delft University of Technology).
Abstract: This paper introduces a Software Architecture Complexity Model (SACM) based on theories from cognitive science and system attributes that have proven to be indicators of maintainability in practice. SACM can serve as a formal model to reason about why certain attributes influence the complexity of an implemented architecture. Also, SACM can be used as a starting point in existing architecture evaluation methods such as the ATAM. Alternatively, SACM can be used in a stand-alone fashion to reason about a software architecture's complexity.

Slide 24

Slide 24 text

Why should we believe them?
Show · Evaluate · Review

Slide 25

Slide 25 text

Why don’t we know about it?

Slide 26

Slide 26 text

Rubber duck debugging

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

“Introducing rubber ducking leads to fewer interruptions”

Slide 30

Slide 30 text

Are we being interrupted?

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

Without duck: 4
With duck: 6
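
Two raw counts tell us very little on their own. As a minimal sketch of why (assuming the 4 and 6 are interruption counts observed over comparable periods; the slides don't say), here is the standard conditional test for comparing two such counts, in plain Python:

from math import comb

without_duck = 4  # counts as shown on the slide
with_duck = 6
n = without_duck + with_duck  # 10 interruptions in total

# Under "no difference", each interruption is equally likely to fall in
# either condition, so the with-duck count is Binomial(n, 0.5). The
# two-sided p-value is the probability of a split at least as lopsided
# as the observed 6/4.
observed = max(with_duck, without_duck)
p_value = sum(comb(n, k) for k in range(n + 1)
              if max(k, n - k) >= observed) / 2 ** n
print(p_value)  # ~0.754: no evidence of a real difference at this sample size

With numbers this small, almost any split is compatible with chance, which is exactly why the next slides turn to threats to validity and a better design.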

Slide 33

Slide 33 text

Threats to Validity: Construct · Internal · External · Conclusion

Slide 34

Slide 34 text

A better design

Slide 35

Slide 35 text

[Chart: per-day schedule for Alice*, Bob*, Charlie, and Debbie across two weeks (Monday through Friday); legend: "Talked to the duck, worked" vs. "Talked to the duck, didn't work"]

Slide 36

Slide 36 text

[Table: interruption counts per participant (Alice, Bob, Charlie, Debbie), without duck vs. with duck]
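
The per-participant table makes a paired comparison possible: each person serves as their own control. A minimal sketch, with made-up counts (the slide shows the table structure but no values, so the numbers below are purely illustrative):

from scipy.stats import ttest_rel

# Hypothetical interruption counts per participant, in the order
# Alice, Bob, Charlie, Debbie; illustrative values only.
without_duck = [7, 5, 6, 8]
with_duck = [4, 3, 6, 5]

# Paired t-test: stable between-person differences cancel out, so the
# test looks only at each participant's own with/without difference.
result = ttest_rel(without_duck, with_duck)
print(result.statistic, result.pvalue)  # ~2.83, ~0.066 for these values

Pairing is what makes four participants go further than four independent measurements would, although the quotes on the next slides show why the numbers still aren't the whole story.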

Slide 37

Slide 37 text

“I felt really weird talking to the duck; it was smiling at me all the time!”

Slide 38

Slide 38 text

“I took the duck home and lost it to my child, so I just talked to my mouse”

Slide 39

Slide 39 text

“Which ducks? I did not see any ducks. I finally managed to fix this issue that has been bugging me for days!”

Slide 40

Slide 40 text

Report

Slide 41

Slide 41 text

Source: http://en.wikipedia.org/wiki/File:Bundesarchiv_B_145_Bild-F031434-0006,_Aachen,_Technische_Hochschule,_Rechenzentrum.jpg

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

Identify a problem → Gather information about solution → Propose solution → Evaluate (with data) → Reflect on evaluation → Peer review → Publish

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

[email protected] @EricBouwers
Reflect on others → Formulate a claim → Collect data (repeatable) → Reflect on evidence → Share your results → Buy a rubber duck!
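
"Collect data (repeatable)" is the step that is easiest to skip and cheapest to automate. As a minimal sketch (the file name and fields are my own choices, not from the talk), a tiny logger that every team member runs the same way keeps the measurements comparable across people and weeks:

import csv
import sys
from datetime import datetime

def log_interruption(cause, path="interruptions.csv"):
    # Append one timestamped record per interruption, so repeated runs
    # build up a data set in a single, consistent format.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), cause])

if __name__ == "__main__":
    log_interruption(sys.argv[1] if len(sys.argv) > 1 else "unspecified")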