Nasdanika Architecture provides models and tools for “architecture-as-code” development.
Nasdanika Architecture is inspired by and borrows from TOGAF and C4 Model.
Software architecture is the fundamental structure of a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations.
Architecture development:
TODO: Before you invest your time into reading, take a look at the demo. Link to the demo, demo docs - how it is done. Short summary here. Composite.
`.html` files are published with the `.aspx` extension.

This section provides an overview of NASDAF architecture practice components.
NASDAF uses Eclipse Modeling Framework Ecore to define the meta-model. This section provides an overview of Ecore concepts.
EClasses can define operations (EOperations), e.g. `patch()` and `restart()`.

Metamodel elements can be annotated with EAnnotations. Annotations can be used to associate documentation and constraints/validations with model elements. An example of a validation would be ensuring uniqueness of IP addresses across all servers.
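The IP-uniqueness validation mentioned above can be sketched in plain Java. The `Server` type and the error-message format are illustrative; a real implementation would express this check against metamodel classes via the EMF validation API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the IP-uniqueness validation. The Server type is
// hypothetical - in NASDAF this would be a metamodel class.
public class IpUniquenessValidator {

    public static final class Server {
        public final String name;
        public final String ipAddress;

        public Server(String name, String ipAddress) {
            this.name = name;
            this.ipAddress = ipAddress;
        }
    }

    // Returns validation error messages; an empty list means the model is valid.
    public static List<String> validate(List<Server> servers) {
        Map<String, List<String>> serversByIp = new HashMap<>();
        for (Server server : servers) {
            serversByIp.computeIfAbsent(server.ipAddress, ip -> new ArrayList<>())
                       .add(server.name);
        }
        List<String> errors = new ArrayList<>();
        for (Map.Entry<String, List<String>> entry : serversByIp.entrySet()) {
            if (entry.getValue().size() > 1) {
                errors.add("Duplicate IP address " + entry.getKey()
                        + " used by " + entry.getValue());
            }
        }
        return errors;
    }
}
```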
An Ecore metamodel is used to generate Java classes representing metamodel EClasses. Compiled classes can be packaged as a Maven jar and deployed to a Maven repository, e.g. Maven Central. Model files can also be packaged and published as a Maven jar. Example - Nasdanika Ncore.
An Ecore metamodel and its generator model can also be loaded using the Java API.
Nasdanika HTML Ecore can be used to generate metamodel documentation. Example - Nasdanika HTML Application Model.
The metamodel defines the architectural ubiquitous language and generated documentation is a dictionary of that language.
Metamodels can reference other metamodels. For example, the Application metamodel references Ncore metamodel. Application metamodel classes extend and use Ncore metamodel classes.
Metamodel files are XML files and can be diffed and merged which facilitates collaborative development.
The Ecore code generator uses `@generated` Javadoc tags to mark generated code. It does not overwrite code which is not marked as generated, or whose `@generated` tag is "dirty" - has some text after it, e.g. `@generated NOT`. This feature allows generated and hand-crafted code to co-exist: a developer adds `NOT` after `@generated` and provides the operation implementation. See the EMF Tutorial.
Another technique is to add a `Gen` suffix to the generated method and leverage the generated code in hand-crafted code:

```java
/**
 * <!-- begin-user-doc -->
 * <!-- end-user-doc -->
 * @generated
 */
public String getNameGen() {
    return name;
}

public String getName() {
    return format(getNameGen());
}
```
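A self-contained sketch of this pattern is shown below. The `NamedElement` class and the trimming/capitalization logic are illustrative, not actual EMF-generated code; the point is that the hand-crafted accessor post-processes the generated one.

```java
// Sketch of the Gen-suffix pattern: getNameGen() stands in for the generated
// accessor, getName() is the hand-crafted override built on top of it.
public class NamedElement {

    private final String name;

    public NamedElement(String name) {
        this.name = name;
    }

    /**
     * @generated
     */
    public String getNameGen() {
        return name;
    }

    // Hand-crafted: trims and capitalizes the generated value (illustrative).
    public String getName() {
        String raw = getNameGen();
        if (raw == null || raw.isEmpty()) {
            return raw;
        }
        String trimmed = raw.trim();
        return Character.toUpperCase(trimmed.charAt(0)) + trimmed.substring(1);
    }
}
```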
At runtime ecore models are organized as follows:
One powerful feature of Ecore is object proxies. A proxy object can be created with a URI of the target object. EReferences may be configured to transparently resolve proxies.
Resources can be loaded from different sources using org.eclipse.emf.ecore.resource.Resource.Factory. Resource factories can be associated with extensions or protocols.
org.eclipse.emf.ecore.resource.URIHandler can be used to load/save resources as streams.
Ecore provides org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl for loading and saving resources in XMI format.
Models can be stored in a variety of databases using CDO and NeoEMF.
Nasdanika provides the following resource and resource factory classes:
Custom URI handlers can be created to load models from sources such as:

- A `git:` scheme can be used to load resources using JGit, and a `gitlab:` scheme to load using gitlab4j-api. This approach allows referencing resources at a specific reference, e.g. a branch or tag, which is more flexible than Git submodules. For REST API cases it also allows loading resources on demand instead of performing a full clone.
- `maven://<group id>/<artifact id>/<version>/<file>[/<path in jar>]` URIs resolved and loaded using Maven Artifact Resolver.

Custom factories may load resources from, say:
In some cases adapter services can be created to load models from information systems and serve them over HTTP(S) as XMI, JSON, or YAML. In case of XMI an adapter may leverage classes generated from metamodels to programmatically populate a model and then save it to XMI.
Loading logic may retrieve data from multiple sources. For example:
- `profile.yml` located in the person's personal Git repository, if such repository and file exist.
- `pom.xml`. Information about developers can be loaded as proxies with developer e-mails as proxy URIs. The proxies would be resolved to person objects loaded from the organization's directory. Person objects may have a computed (derived) opposite to the "developers" reference of a software component model element. This would make it possible to list all software components for a given developer.

As you can see, EMF allows loading data from multiple sources - databases, diagrams, YAML, JSON, Excel, information systems - and cross-referencing (stitching), validating, and representing it as a coherent graph of model elements.
The data loading logic acts as an anti-corruption layer hiding complexities of data retrieval.
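The developer e-mail proxy idea above can be sketched in plain Java. All types here are hypothetical; in real Ecore the framework resolves proxies transparently (e.g. via `EcoreUtil`), but the mechanics are the same: hold a URI, resolve it lazily against a source, cache the result.

```java
import java.util.function.Function;

// Plain-Java sketch of proxy resolution: a software component references a
// developer by e-mail (the "proxy URI"); the Person object is resolved
// lazily from a directory lookup and cached.
public class ProxyStitching {

    public static final class Person {
        public final String email;
        public final String displayName;

        public Person(String email, String displayName) {
            this.email = email;
            this.displayName = displayName;
        }
    }

    public static final class PersonProxy {
        private final String email; // proxy URI
        private Person resolved;    // cached resolution

        public PersonProxy(String email) {
            this.email = email;
        }

        // Resolves the proxy against a directory on first access.
        public Person resolve(Function<String, Person> directory) {
            if (resolved == null) {
                resolved = directory.apply(email);
            }
            return resolved;
        }
    }
}
```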
There are a number of ways to define viewpoints for EMF models:
Drawio diagrams can be used as model resources and as free-form model views (representations) which are not constrained by a viewpoint specification.
This section provides an overview of Drawio.
See Nasdanika Core Drawio for more details.
Drawio diagrams can be created and edited using the following methods:
Point `js/viewer-static.min.js` to the URL of your hosted editor - search and replace `https://app.diagrams.net` with the URL of your installation.

Drawio shapes and diagrams can be added to a user library and that library can be used to author new diagrams. This feature can be used to create libraries of shapes with pre-configured semantic mapping.
This section compares three approaches to architecture practice which can be put on the “formalism continuum” as follows:
The table below provides a side-by-side comparison of the approaches for each architecture practice element. Subsequent sub-sections provide a more detailed explanation of each approach, its pros and cons, and when to consider moving from one to another.
When you read this section keep in mind that "All models are wrong, but some are useful." Any model is wrong by definition because it is a description of particular aspects of reality. If a model is right then it is not a model, it is reality.

What is useful depends on the context and the goals. In particular, reality and models/metamodels with a high level of detail are not useful in many cases because they are too complex. You need to strive for just the right level of complexity/simplicity - "Everything Should Be Made as Simple as Possible, But Not Simpler".
Component | Informal | NASDAF | UML |
---|---|---|---|
Metamodel | Informal/tacit. May be documented, but the documentation does not constrain the model through automated means - it is up to modelers to create models compliant with the metamodel. | Formal metamodel with types (classes) specific to the organization. | Formal metamodels with types (classes) specified by the notation being used, e.g. UML. |
Model | Informal - tacit/tribal knowledge. | Formal model compliant with the metamodel. | Formal model compliant with the notation metamodel. |
Viewpoint | Informal - diagram types used in the organization. | Formal - HTML generators. | Formal graphical notations (diagram types). |
View | Informal - PowerPoint, Visio, Drawio diagrams in Confluence pages. | Informal - Drawio; formal - HTML pages. | Formal - diagrams compliant with diagram types. |
View to model cardinality | Undefined, as there is no formal model. | Many-to-many for Drawio semantic mapping with namespaces. The same diagram file (resource) can be used to load different models: an architecture model to communicate system functionality, an implementation model integrated with a work tracking system to communicate development progress, and a runtime model integrated with monitoring solutions to communicate system status, e.g. failed or overloaded components. | Many-to-one. |
Repository | Distributed and disconnected - filesystem, OneDrive/SharePoint, Confluence. | Distributed and connected - YAML, JSON, Excel, Drawio, XMI files in version control, integrations. | Tool-specific (opaque) repository - MS Access database file, DBMS, cloud. 1 |
In the low-formalism scenario there is no formal metamodel and no formal model. Or the metamodel may be formal, but not traceable from architecture artifacts - all architecture stakeholders are expected to know the metamodel and the model by heart. There are many architecture artifacts mentioning, but not referencing, the same concepts (model elements).

In the diagram above "ABC Microservice" and "XYZ SOR" are mentioned, but not referenced. A person looking at the diagram is expected to possess contextual knowledge to understand what is depicted and where to look for additional information.

An informal practice is a good fit for situations where the informal metamodel and model are well understood in the organization and the level of change is low enough - the organization/team is stable, and new people have time and access to resources to learn how things are done.
Some indicators when this practice becomes a misfit:
One situation where such a misfit happens is when a small team develops a solution which is then positioned as a shared/reusable component/asset, or as a platform on which new solutions should be built. The team grows, and what worked well for, say, 10 people doesn't work anymore for 50 core developers and hundreds of developers who are told to use the shared component/platform.
Compared to the other two approaches, NASDAF requires investment into the development of a metamodel and viewpoints.
It is perhaps overkill if architecture is developed and maintained by a small group of people and architecture consumers either possess contextual knowledge to understand diagrams not backed by models, or are knowledgeable of the underlying metamodel and notation, e.g. UML and its diagrams.
It is a good fit if:
With industry standards such as UML you get a metamodel and viewpoints out of the box. It is a good fit if people consuming architecture artifacts "speak" UML and are able to access the architecture documentation, and people contributing to architecture development have access to and know how to use the modeling tool.

However, the tool shapes you instead of you shaping the tool. You may need to find a way to capture organization-specific concepts. In UML this can be done with, say, stereotypes and tags.
If the above conditions are not met, UML may become a burden:
Adoption of NASDAF has 3 dimensions as shown on the diagram below:
The value to the organization is proportional to the volume of the 3-D shape reflecting the adoption progress. However, adoption along just the practice and metamodel dimensions still brings value. On the diagram the shape is a rectangular cuboid, but it doesn't have to be, as explained below.
You can think of the three dimensions using the following metaphor:
The metamodel describes the organization's architecture problem domain. The organization's metamodel can be an extension of an existing metamodel, e.g. Ncore or the NASDAF Model.

Also, initially there might be no organization-specific metamodel - you can start with an existing metamodel and then elicit a metamodel from the model. E.g. if you find that in many cases architecture documentation contains a "Contacts" section, then you should probably capture it in the metamodel and migrate "Contacts" from the documentation to the model.
The opposite is also true - initially there might be no model, only a metamodel to drive common understanding of concepts and relationships within the organization.
A few ideas regarding metamodel evolution are outlined in the sections below.
You can start with a taxonomy of “things” using Composite. E.g.
You can use block diagrams to define and represent the hierarchy of composites.
Drawio Semantic Mapping Demo is an example of such a taxonomy.
The next step after building a taxonomy is an ontology - start creating classes with attributes (properties) and relationships.
Both capabilities and building blocks may have versions. Versions may have temporal properties. They may extend Period or have lifecycle phases extending Period, e.g. "Planned", "Development", "Beta", "GA", "Aging", "Retired".
A capabilities/building blocks metamodel would allow building "supply chain" views showing a graph of building blocks and provided and consumed capabilities.
Using an example from the taxonomy section:
Separation of capabilities (the WHAT) from building blocks providing them (the HOW) allows defining solution patterns in terms of capabilities and then building pattern embodiments by selecting building blocks providing those capabilities.

E.g. "Drink - Appetizer - Main Course - Dessert" is an offering pattern used by restaurants, and a menu is a catalog of building blocks providing "Drink" and other capabilities.

Supply chains may be used to identify risks. For example, you may define a capability "Expertise with ABC component" and ask people in your organization to list their expertise in the form of provided capabilities (it can also be modeled differently). Then you may discover that a particular building block is rather critical because it is used by many offerings, but only a handful of people have expertise with it.
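The risk analysis just described can be sketched in plain Java. The `BuildingBlock` type, field names, and thresholds are illustrative; in NASDAF these would be metamodel classes and the query would run over the loaded model graph.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Plain-Java sketch of supply-chain risk analysis: flag building blocks with
// many consumers but few people providing the "expertise" capability.
public class SupplyChainRisk {

    public static final class BuildingBlock {
        public final String name;
        public final Set<String> consumers = new HashSet<>(); // offerings using the block
        public final Set<String> experts = new HashSet<>();   // people with expertise

        public BuildingBlock(String name) {
            this.name = name;
        }
    }

    // A block is critical when it is widely consumed but thinly supported.
    public static List<String> criticalBlocks(Collection<BuildingBlock> blocks,
                                              int minConsumers, int maxExperts) {
        List<String> critical = new ArrayList<>();
        for (BuildingBlock block : blocks) {
            if (block.consumers.size() >= minConsumers
                    && block.experts.size() <= maxExperts) {
                critical.add(block.name);
            }
        }
        return critical;
    }
}
```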
A concept of a "Work Package" can be introduced to capture what needs to be done. A work package would have relationships with the components it impacts - where work needs to be done - and with the parties doing the work, e.g. teams or vendors. Work packages can be nested (WBS) and have phases similar to the versions above, e.g. "Backlog", "In Progress", "Completed", "Deployed". Such phases may have temporal properties, e.g. extend Period, or a work package itself may extend Period. Work packages may also have relationships with the organization's units of funding and with work items in a work tracking system. One work package may correspond to multiple work items.
Work packages may be modeled as flows.
In organizations which have multiple divisions/business lines and operate in different regions there might be variations in how things are structured. One example would be US and UK mailing addresses. They have common parts - street and city - and different parts - state and ZIP code in the US address, postal code in the UK address.
A way to manage this variability is to have a core metamodel which contains common classes and structural features, and region/business line specific metamodels extending the core metamodel. In such a hierarchy metamodel elements can move vertically and horizontally. A vertical move is pulling generic things up and pushing specific things down. A horizontal move is “borrowing” or “copy-paste” - one part of organization may leverage things from another part without pulling them up the hierarchy (yet).
Situations where there is more than one hierarchy/dimension of variability (e.g. regional and business line) can be handled as follows:
If needed, you may introduce a concept of suppressing features and classes using annotations and validations. Imagine that there is a particular attribute that is used in the majority of cases - say, street name and number in addresses - and there are regions where street names and numbers are not used. This situation may be handled in several ways:
TOGAF has a concept of the Enterprise Continuum to classify how reusable different assets are, e.g. solution building blocks.
Concepts of the enterprise continuum can be introduced into the metamodel to drive higher and more disciplined reuse.
TOGAF’s enterprise continuum defines the following classification “buckets”:
The organization-specific bucket can be divided into sub-buckets and that division is sometimes not linear, but rather a multi-dimensional hierarchy as exemplified below:
With multiple dimensions of the continuum, reuse becomes a game of multi-dimensional chess - how do I occupy a particular sub-bucket if I already have assets in such and such buckets?
This section outlines how to introduce the enterprise continuum and reusability into the metamodel and the model.
Building blocks can be defined in the capabilities taxonomy/catalog or they can be contained by the model element representing continuum dimensions. For example, a building block created for a specific solution can be defined “under” that solution. It may be moved to the taxonomy catalog after it was made reusable.
One technique which can be used to make building blocks more reusable is refactoring a building block into several blocks by separating generic and specific functionality. The generic part would provide "extension points", e.g. methods which can be overridden, or configuration properties. The specific part would provide "extensions" for these points, e.g. override methods; it may in turn introduce its own extension points.
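The extension-point refactoring can be sketched as follows. The report classes and their names are illustrative; the point is the split into a generic block defining the workflow and an extension point, and a specific block providing the extension.

```java
// Sketch of refactoring a building block into a generic part (workflow plus
// extension point) and a specific part (extension).
public class ExtensionPoints {

    // Generic building block: fixes the workflow, exposes an extension point.
    public static class GenericReport {
        public final String render(String data) {
            return header() + data;
        }

        // Extension point - specific blocks override this.
        protected String header() {
            return "Report: ";
        }
    }

    // Specific building block: provides an extension for the header.
    public static class BrandedReport extends GenericReport {
        @Override
        protected String header() {
            return "ACME Report: ";
        }
    }
}
```

Making `render` final keeps the workflow in the generic block, so specific blocks can only vary at the declared extension points.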
Along with the enterprise continuum concepts you may introduce a concept of “source continuum” to help with collaborative/inner source development. Elements of this continuum would include:
Dynamic viewpoints can be created using the following techniques:
The two techniques can be used together.
So far we've been talking about generation of documentation from architecture metamodels and models. As the metamodels and models become more and more detailed, at some point it may make sense to use them to generate (instantiate) solutions.
Solution instantiation may include the following steps:
The above steps may be automated or manual:
In some cases it should be possible to execute the instantiation process multiple times so that it applies only changes. It is also possible to implement code generation in such a way that it merges generated code with manual modifications made to the originally generated code. This functionality is available for Java and can be implemented for other artifact types.
Code generation/solution instantiation may be implemented in several ways, some of them are listed below:
`.ecore` and `.genmodel` files.

These approaches may be combined.
You may have different solution instantiators or instantiation profiles for different contexts/environments. E.g. for a local environment a solution can be instantiated with stubbed/mocked dependencies and, say, embedded databases.
You may also instantiate several embodiments of your solution with different configurations to perform a side-by-side comparison - this can be useful for evaluation of new approaches and technologies (POC’s).
Drawio user libraries can be used to facilitate solution instantiation. User libraries may contain:
One definition of software development is "incremental binding of decisions to make them executable". Techniques explained in this section allow binding decisions in a traceable and auditable way.
As mentioned above, you can start building models by using an existing metamodel, e.g. Ncore Composite. Attributes and relationships not supported by the metamodel can be captured in documentation. Then elicit an organization-specific metamodel by analyzing the model, and gradually upgrade models to the new metamodel classes.

Say, in Release 1 a Maven software component can be documented using Composite. In Release 2 it may be changed to a Building Block providing and consuming capabilities. In Release 3 it may be changed to a Maven Component class with information loaded from both a YAML spec and `pom.xml`.
The projection of the adoption progress to the Model & Viewpoints / Model & Views plane is a rectangle only in the case when all newly released metamodel classes and features get adopted at the same time. Because of possibly gradual adoption of the metamodel classes and features this projection is not a rectangle, but something between a rectangle and a triangle (keep in mind, though, that this projection is not an exact science, but rather a metaphorical visualization).
One important outcome of establishing even the simplest model, such as a taxonomy, is that diagram elements now not just mention, but reference model elements, and these references navigate to the same page. E.g. if two diagrams have an "XYZ SOR" diagram element, then a mouse click on either of those diagram elements will navigate to the page describing "XYZ SOR".

Another outcome is that after establishing a taxonomy it is possible to create and publish Drawio user libraries with pre-configured diagram elements, e.g. a library of the organization's SORs or a library of business capabilities from which offerings can be built.
You may start with a model in a single version control repository. As the model grows it can be split into several repositories. Models from different repositories can be "stitched" together using the following techniques:

- Raw URLs, e.g. `https://raw.githubusercontent.com/Nasdanika/core/master/core.yml`. This is similar to Git submodules, but doesn't involve cloning the referenced repository. It may be beneficial in situations where architecture models live in the same repository as the software component they describe and the component is rather large.
- A `gitlab://` scheme to retrieve resources from GitLab using gitlab4j-api. This is similar to raw URLs.
- A `classpath://` URI scheme.

As mentioned in the requirements section above, to start with NASDAF in your local environment you would need a text editor or a Drawio editor, Java 11, and Maven. This would be your starting point from which the NASDAF practice can be evolved as explained below.
While a plain text editor would do, it is better to have an editor with syntax highlighting, error detection, and parse tree outline. For example, Eclipse IDE for Enterprise Java and Web Developers has a YAML editor plug-in.
You can start with Drawio Confluence plug-in, if you are in a corporate environment and your company uses Confluence. Or you can use the online editor or an intranet-hosted editor.
The desktop editor, however, is a bit easier to use for editing local files.
An intranet-hosted editor may be integrated with your version-control system.
You’d need Ecore tools at a minimum. They are available in Eclipse IDE for Enterprise Java and Web Developers and Eclipse Modeling Tools packages.
As your practice matures you may create your own graphical workbench using Eclipse Sirius. You may also explore other technologies mentioned in the “Metamodel” section above.
You can start by sharing archived sites or committing generated documentation to version control for other people to check out and view locally.
Another option is to publish generated files to a shared drive.
If you have OneDrive, you can set up your build to output HTML files with the `.aspx` extension and publish them to OneDrive.
If you have GitHub or GitLab pages, you can publish documentation there.
If you have a web server, an automated build can be set up to generate and publish documentation.
The most flexible way, though, is a publishing service (which can also collect web analytics). Such a publishing service would serve files from your version control repository for any branch, tag, or commit. This would simplify working with feature branches and cross-referencing of model resources.
Solutions for sharing generated documentation may be used for sharing any static documentation, including video tutorials (see below).
The publishing service may also perform rendering. E.g. when `my-document.md.html` is requested and a file with this name does not exist, then `my-document.md` would be rendered to HTML and served.

Documentation generation and solution instantiation logic can be packaged into Maven plug-ins.
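The fallback-rendering rule can be sketched as path logic. The `RenderFallback` class is hypothetical; the file-existence check is passed in as a predicate so the logic stays testable and independent of any particular server.

```java
import java.util.function.Predicate;

// Sketch of the publishing-service fallback: when <name>.md.html is requested
// and no such file exists, derive the Markdown source to render instead.
public class RenderFallback {

    // Returns the path of the Markdown file to render, or null if the
    // requested file should be served as-is.
    public static String markdownSource(String requestedPath, Predicate<String> exists) {
        if (requestedPath.endsWith(".md.html") && !exists.test(requestedPath)) {
            return requestedPath.substring(0, requestedPath.length() - ".html".length());
        }
        return null;
    }
}
```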
You may also make generators and instantiators available as CLI tools, e.g. using picocli.
Some logic may be packaged into helper (web) services. For example, access to surrounding systems such as version control and issue tracking can be handled by a helper service. Code generators such as Maven plug-ins would authenticate with the service using service-specific authentication tokens, and the service would authenticate with the surrounding systems using mechanisms supported by those systems.
Video tutorials created as explained in this Video Tutorial Demo can be integrated with the generated documentation.
If you decide to leverage Eclipse technologies like Sirius and Xtext, you'd likely need to establish an Eclipse ecosystem if you don't have one in place already.

An ADK can be used to develop new architectures on top of the existing architecture, pretty much as SDKs are used for application development.
An ADK may include:
`https://mycompany.com/adk/<yyyy-mm>/`), as a downloadable archive, or as part of the ADK package.

Metrics can be rolled up along containment hierarchies or ownership hierarchies. Such hierarchies can also be used to build visualizations such as heat maps.
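A containment roll-up can be sketched as a recursive traversal: a node's rolled-up metric is its own value plus the rolled-up values of its children. The `Node` type is illustrative; in NASDAF the hierarchy would come from model containment or ownership references.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of rolling up a metric along a containment hierarchy.
public class MetricRollup {

    public static final class Node {
        public final String name;
        public final double value;
        public final List<Node> children = new ArrayList<>();

        public Node(String name, double value) {
            this.name = name;
            this.value = value;
        }
    }

    // Sum of this node's value and the rolled-up values of all descendants.
    public static double rollUp(Node node) {
        double total = node.value;
        for (Node child : node.children) {
            total += rollUp(child);
        }
        return total;
    }
}
```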
With NASDAF you start small - download the demo project, modify it to your needs, and add your diagrams and models. This project becomes a seed crystal from which you grow your architecture practice, gradually expanding the three dimensions - Model & Views, Metamodel & Viewpoints, and Practice.
See Model Repository ↩

People don't want quarter-inch drill bits. They want quarter-inch holes. ↩