The primary target audience of the Nasdanika product line is Java developers who use Maven.
The main theme can be expressed as “Efficient knowledge capture and dissemination”, with Domain-Driven Design, Model-Based Systems Engineering (MBSE), and code generation as the means to efficiently codify knowledge and then disseminate it.
Well, it’s a mouthful! Let’s elaborate a bit!
First of all, “efficiency” above means minimal human and computing resources per “unit of knowledge”. It also means hyper-local and hyper-focused experiences, because humans are notoriously bad at context switching. One approach to minimizing context switching is code generation in Java, e.g. generating Bootstrap web applications with a fluent Java API. Another is the option to author models in YAML without having to use a specialized model editor.
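As a purely illustrative sketch of what building HTML with a fluent Java API can feel like (the builder below is hypothetical and hand-written, not the actual Nasdanika HTML API):

```java
// A purely illustrative fluent-builder sketch, NOT the actual Nasdanika HTML API.
public class FluentHtmlSketch {

    // Minimal HTML tag builder with a fluent interface.
    static final class Tag {
        private final String name;
        private final StringBuilder body = new StringBuilder();

        Tag(String name) { this.name = name; }

        // Append child tags or text, returning this for chaining.
        Tag content(Object... children) {
            for (Object child : children) {
                body.append(child);
            }
            return this;
        }

        @Override
        public String toString() {
            return "<" + name + ">" + body + "</" + name + ">";
        }
    }

    static Tag tag(String name) { return new Tag(name); }

    public static void main(String[] args) {
        // Build a page fragment entirely in Java - no context switch to templates.
        String fragment = tag("div").content(
                tag("h1").content("Releases"),
                tag("p").content("Generated from the engineering model")
        ).toString();
        System.out.println(fragment);
    }
}
```

The point of the fluent style is that the whole experience stays inside Java: the same language, IDE, and build as the rest of the codebase.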
Computing resources-wise, there is no need for server components - you can use Nasdanika products from a flash drive in a bunker or in the wilderness with no network: --everything-is-hyper-local, paraphrasing the Git motto. Access to the Internet is needed only to pull the required libraries from Maven Central or a mirror.
You’ve got an idea of a new “something” - not necessarily software.
Or you can start with a diagram and then derive a model from it, mapping diagram elements to model elements.
With the engineering model you can define consumers of what you want to create - personas - and their goals. This will allow you to better understand and communicate the Why of your idea.
It will also allow you to enlist the help of others, e.g. potential users, to elaborate the personas and their goals - they likely know better than you what they want, unless you are building it for yourself.
Then define releases, features, issues, increments, and engineers. Assign features to releases, issues to features or releases, and releases to increments. Assign issues to engineers. You may also assign responsible engineers to releases, features, and increments. This defines the How and the When, both traceable to the What (features) and the Why (persona goals).
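As a purely illustrative sketch of such a definition (the element and attribute names below are hypothetical - the actual ones are defined by the engineering model):

```yaml
# Hypothetical structure; consult the engineering model documentation
# for the actual element and attribute names.
increment:
  name: Q3
  releases:
    - name: 1.0.0
      features:
        - name: YAML model loading
          issues:
            - name: Parse nested references
              assignee: joe-doe   # an engineer defined elsewhere in the model
```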
As mentioned above, all of this can be done on a local computer - you don’t need any server components, and you don’t even need to keep it under version control, although that is a good idea. So, if you have a super-secret idea, you can keep everything on an encrypted thumb drive and work on it at night in your basement. In a team environment you can use an isolated network to collaborate.
One more advantage of everything being local is that you can produce self-contained idea definitions which can be archived, copied, forked, etc. This makes it possible to establish a discovery-delivery pipeline where many ideas can be brought to a high enough level of detail to allow efficient selection for delivery later. Having shared definitions of personas, goals, principles, and objectives simplifies comparison of ideas based on their benefit and cost.
There is more to it - see the engineering model documentation and the list of features for more details.
With Nasdanika Flow you can model customer journeys and team processes. Services allow building flows “in terms of” other flows, e.g. a customer journey flow referencing an internal process as an activity. Inheritance allows defining common flows and then customizing them. For example, in an IT organization there might be a core development process and then technology- and region-specific extensions.
The flow concepts are very similar to the concepts of a programming language, such as Java, or of a distributed system, such as a cloud application. In other words with Nasdanika Flow you can “program” or “codify” operations of an organization.
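As a purely illustrative sketch of services and inheritance (the element and key names below are hypothetical; the actual ones are defined by the flow model):

```yaml
# Hypothetical structure; consult the flow model documentation
# for the actual element and key names.
core-development:
  activities:
    code: {}
    review:
      services:
        - security-scan         # an internal flow referenced as a service
regional-development:
  extends: core-development     # inheritance: customize the common flow
  activities:
    compliance-review: {}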
The journey package in the engineering model (work in progress) aims to provide a bridge between the flow model and the engineering model in order to allow defining flows at the persona level, referencing personas from participant definitions, products/features from resource definitions, etc. With that it would be possible to build persona (customer) journeys in terms of product features and services provided by the organization and its engineers.
Problem domain structure is captured in the form of an EMF Ecore (meta) model using Ecore Tools, which are part of the Eclipse Modeling Tools package. You can use a diagram or a tree editor to create domain models.
Flow and Engineering models mentioned above are examples of domain models for different problem domains.
Domain models serve the following purposes:
You’ve probably heard that “all models are wrong, but some are useful”. Models are wrong by definition because their purpose is to reflect an aspect of interest of reality, not reality itself.
If we define usefulness as positive ROI and short time to market, then with the Nasdanika approach to modeling it is easier to create a useful model, because production and publication of a model requires less effort than with alternative approaches, as explained below.
- Use Ecore Tools.
- To programmatically read or write models.
One area where the Nasdanika approach to modeling may be useful is disposable/situational models - using a modeling approach to solve a problem which is too small to justify investment in graphical editors, p2 hosting, etc., but large enough to benefit from automation.
Metaphorically speaking, Nasdanika uses r-selection: create cheap models quickly, see which ones are useful, and then invest more if needed. The alternatives tend to be closer to the K end of the spectrum, requiring larger up-front investment.
Once there is a domain Ecore (meta) model, you can start creating model resources containing definitions of model elements. E.g. if your meta-model contains classes Organization and Engineer, you may define an organization “Acme Corp.” with an engineer “Joe Doe”.
There are multiple ways to populate the model; some of them are listed below. You don’t have to use one particular way - they can be used in combination. Some resources in your model may be defined in YAML and edited in a text editor, while others may be defined in XMI and edited with, say, a diagram editor. These resources can cross-reference each other.
You can create models in YAML, see EMF Persistence. With this approach, unlike the others listed below, you don’t need to create a specialized editor - just use a YAML or text editor and load specification pages of the generated model documentation as a reference.
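Continuing the Acme Corp. example above, a YAML resource for such a model might look roughly like this (the exact keys are hypothetical here - they are defined by your Ecore model and documented in the generated specification pages):

```yaml
# Hypothetical structure; consult the generated model documentation
# for the actual attribute and reference names.
organization:
  name: Acme Corp.
  engineers:
    - name: Joe Doe
```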
Because models are defined in text, as-code, they are easy to collaborate on without the use of specialized merge tools - use traditional diff and merge as for other textual artifacts such as Java files.
This approach also makes it relatively easy to load data from external systems which have no knowledge of your models’ XMI format. Data can be loaded from REST endpoints or from data exports.
You can generate a tree editor for your models using Ecore Tools. It is a relatively small effort, but you’ll need to build the editor and publish it to a p2 repository (update site), and model authors will need an Eclipse IDE and the ability to install the editor.
With Xtext you can create a specialized text editor with syntax highlighting, code completions, live validation and other features.
In Domain-Driven Design terminology, such a resource and resource factory would act as an Anti-Corruption Layer.
So you’ve captured quite a bit of knowledge in your models, now it is time to share it! One way to share knowledge with humans is to generate a web site. For example, this site was generated from multiple engineering models stored in multiple Git repositories with documentation generated from Ecore models “mounted” to it.
You can add dynamic read-only behavior to the generated site using single-page applications which use model data. The “All issues” page is an example of such an application, built with Vue.js and BootstrapVue. It uses browser local storage for user preferences. This approach allows highly focused mini-apps to be injected into a static web site.
Should you need more dynamic behavior, you can use helper web services. In this case model data can be used as input to the services in addition to user-provided data, putting user input into a context. E.g. a single site may have multiple mini-apps calling the same service with different inputs.
Model data can also be consumed by computers. In Java you would use the generated model classes. You may load your model from multiple sources and then save it to XMI, which can be published to a binary repository or on a web site.
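As a minimal plain-Java sketch of the idea (the classes below are hand-written stand-ins - in practice they would be generated from the Ecore model, and loading/saving would go through EMF resources):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-written stand-ins for classes that would normally be generated
// from the Ecore model; names follow the Organization/Engineer example above.
public class ModelAccessSketch {

    static final class Engineer {
        final String name;
        Engineer(String name) { this.name = name; }
    }

    static final class Organization {
        final String name;
        final List<Engineer> engineers = new ArrayList<>();
        Organization(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // In real code these instances would be loaded by an EMF ResourceSet
        // from YAML, XMI, or other resources, and could be saved back to XMI.
        Organization org = new Organization("Acme Corp.");
        org.engineers.add(new Engineer("Joe Doe"));
        System.out.println(org.name + " has " + org.engineers.size() + " engineer(s)");
    }
}
```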
Or before, or in parallel - totally up to you. Start somewhere, expand from there.