The Scientific Programmer - Eric Bouwers

Joy of Coding

March 07, 2014

Transcript

  1. The Scientific Programmer “How do you know that you know?”

    @EricBouwers
  2. What do you want to improve in your development process?

  3. Interruptions!

  4. So what do you do?

  5. Rubber duck debugging

  6. None
  7. Source:http://en.wikipedia.org/wiki/File:Bundesarchiv_B_145_Bild-F031434-0006,_Aachen,_Technische_Hochschule,_Rechenzentrum.jpg

  8. None
  9. They simply don’t believe!

  10. None
  11. None
  12. Research: “The systematic investigation into and study of materials and

    sources in order to establish facts and reach new conclusions.”
  13. Researchers Source:http://en.wikipedia.org/wiki/File:Solvay_conference_1927.jpg

  14. Source:http://upload.wikimedia.org/wikipedia/commons/e/ea/University_of_Bradford_school_of_management.jpg

  15. Source:http://commons.wikimedia.org/wiki/File:Views_of_the_LHC_tunnel_sector_3-4,_tirage_1.jpg

  16. None
  17. “Developing a deep understanding of how people build and evolve

    software systems.”
  18. “Developing novel methods, techniques and tools that advance the way

    in which software is built and modified.”
  19. Knowledge problems Practical problems Roel Wieringa; Design Science as Nested

    Problem Solving. DESRIST 2009
  20. How do researchers do this?

  21. Identify a problem → Gather information about solution → Propose solution → Evaluate (with data) → Reflect on evaluation → Peer review → Publish
  22. None
  23. (Slide shows the first pages of two papers.) “A Metric for Assessing Component Balance of Software Architectures” by Eric Bouwers, José Pedro Correia, Arie van Deursen and Joost Visser (submitted to ICSE’11), which proposes the Component Balance metric for quantifying whether a system is decomposed into a reasonable number of balanced components ✖; and “A Cognitive Model for Software Architecture Complexity” by Eric Bouwers, Joost Visser, Carola Lilienthal and Arie van Deursen, which introduces the Software Architecture Complexity Model (SACM), a formal model for reasoning about why certain system attributes influence the complexity of an implemented architecture ✔
  24. Why should we believe them? Show · Evaluate · Review

  25. Why don’t we know about it?

  26. Rubber duck debugging

  27. None
  28. None
  29. “Introducing rubber ducking leads to fewer interruptions”

  30. Are we being interrupted?

  31. None
  32. Without duck: 4 · With duck: 6

  33. Threats to Validity: Construct, Internal, External, Conclusion

  34. A better design

  35. (Chart: experiment schedule for Alice*, Bob*, Charlie and Debbie across Monday to Friday, with a legend distinguishing “talked to the duck: worked” from “talked to the duck: didn’t work”)
  36. (Chart: results for Alice, Bob, Charlie and Debbie, each shown both without and with the duck)

  37. “I felt really weird talking to the duck, it was

    smiling at me all the time!”
  38. “I took the duck home and lost it to my

    child, so I just talked to my mouse”
  39. “Which ducks? I did not see any ducks. I finally

    managed to fix this issue that has been bugging me for days!”
  40. Report

  41. Source:http://en.wikipedia.org/wiki/File:Bundesarchiv_B_145_Bild-F031434-0006,_Aachen,_Technische_Hochschule,_Rechenzentrum.jpg

  42. None
  43. Identify a problem → Gather information about solution → Propose solution → Evaluate (with data) → Reflect on evaluation → Peer review → Publish
  44. None
  45. eric@sig.eu · @EricBouwers · Reflect on others · Formulate a claim · Collect data (repeatable) · Reflect on evidence · Share your results · Buy a rubber duck!
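
The closing slide's advice (formulate a claim, collect repeatable data, reflect on the evidence) can be made concrete with a few lines of code. The sketch below is not part of the talk: the developer names and interruption counts are invented, and the sign-flip permutation test is just one simple way to check whether paired "with duck" measurements really show fewer interruptions than "without duck" ones.

```python
# Hypothetical sketch (not from the talk): making the "Evaluate (with data)"
# step repeatable. Names and numbers are invented for illustration.
import random
import statistics

# Interruptions per developer, measured once without and once with the duck
# (a within-subjects design like the one sketched on slides 34-36).
interruptions = {
    "Alice":   {"without_duck": 7, "with_duck": 4},
    "Bob":     {"without_duck": 5, "with_duck": 5},
    "Charlie": {"without_duck": 6, "with_duck": 3},
    "Debbie":  {"without_duck": 4, "with_duck": 2},
}

# Paired differences: positive means fewer interruptions with the duck.
diffs = [v["without_duck"] - v["with_duck"] for v in interruptions.values()]
observed = statistics.mean(diffs)

# Sign-flip permutation test: under the null hypothesis the duck has no
# effect, so each paired difference is equally likely to have either sign.
random.seed(0)  # fixed seed so the analysis is repeatable
n_permutations = 10_000
count_as_extreme = 0
for _ in range(n_permutations):
    flipped = [d * random.choice((-1, 1)) for d in diffs]
    if statistics.mean(flipped) >= observed:
        count_as_extreme += 1

p_value = count_as_extreme / n_permutations
print(f"mean reduction in interruptions: {observed:.2f}")
print(f"one-sided permutation p-value:   {p_value:.3f}")
```

With only four developers the test has very little statistical power, which is precisely the kind of limitation the Threats to Validity slide asks you to reflect on before sharing a result.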