The Nasdanika Capability framework allows discovering and loading capabilities which meet a requirement. Capabilities are provided by the CapabilityFactory create() method. Capability factories may request other capabilities they need; as such, capabilities can be chained. Factories create CapabilityProviders, which provide Flux reactive streams of capabilities. This allows an infinite stream of capabilities which are consumed (and produced) as needed. Capability providers may furnish additional information about capabilities, which can be used for filtering or sorting providers.
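The lazy, consume-as-needed nature of such streams can be illustrated with a plain java.util.stream infinite stream (a stand-in for Flux, which follows the same pull-on-demand idea; the names below are illustrative, not framework API):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LazyCapabilityStream {

    // An infinite stream of "capabilities" (here just integers);
    // elements are produced only when a consumer pulls them.
    static Stream<Integer> capabilities() {
        return Stream.iterate(0, n -> n + 1);
    }

    // Consume only as many capabilities as the requirement asks for.
    static List<Integer> consume(int howMany) {
        return capabilities().limit(howMany).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(consume(3)); // only 3 elements are ever produced
    }
}
```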
A non-technical example of a requirement/capability chain graph is a food chain. Food is a requirement, or "I want to eat" is a requirement. Bread and, say, fried eggs are two capabilities meeting that requirement. Bread requires "wheat", "water", and "bake" capabilities. Fried eggs require "egg", "oil", and "fry" capabilities. The bread capability provider may implement a Vegan marker interface which can be used for filtering. All food capabilities may implement a NutritionalInformation interface, which can be used for filtering or sorting.
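Filtering by a marker interface and sorting by an attribute interface can be sketched in plain Java (the Vegan and NutritionalInformation types below are illustrative stand-ins, not actual framework types):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class FoodCapabilities {

    interface Food { String name(); }
    interface Vegan {}                                  // marker interface for filtering
    interface NutritionalInformation { int calories(); } // attribute interface for sorting

    record Bread(String name, int calories) implements Food, Vegan, NutritionalInformation {}
    record FriedEggs(String name, int calories) implements Food, NutritionalInformation {}

    // Keep only vegan capabilities, sorted by calories.
    static List<Food> veganByCalories(List<Food> foods) {
        return foods.stream()
            .filter(f -> f instanceof Vegan)
            .sorted(Comparator.comparingInt(f -> ((NutritionalInformation) f).calories()))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Food> foods = List.of(new FriedEggs("fried eggs", 200), new Bread("bread", 250));
        veganByCalories(foods).forEach(f -> System.out.println(f.name()));
    }
}
```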
A more technical example is the Java ServiceLoader, with the service type being a requirement and an instance of the service class being a capability. The Nasdanika capability framework can operate on top of ServiceLoader and may be thought of as a generalization of service loading. In essence, the capability framework is a backward chaining engine, as shown in one of the examples below.
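The backward chaining idea - a factory resolving its own sub-requirements before producing a capability - can be sketched in plain Java (a toy resolver, not the framework API; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class BackwardChaining {

    // A "factory" maps a requirement to a capability, possibly by
    // resolving sub-requirements first (backward chaining).
    static final Map<String, Function<Function<String, String>, String>> FACTORIES = new HashMap<>();

    static {
        FACTORIES.put("bread", resolve ->
            resolve.apply("wheat") + "+" + resolve.apply("water") + "->bread");
        FACTORIES.put("wheat", resolve -> "wheat");
        FACTORIES.put("water", resolve -> "water");
    }

    // Resolve a requirement by chaining through factories recursively.
    static String resolve(String requirement) {
        return FACTORIES.get(requirement).apply(BackwardChaining::resolve);
    }

    public static void main(String[] args) {
        System.out.println(resolve("bread")); // wheat+water->bread
    }
}
```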
Capabilities are loaded by a CapabilityLoader. The capability loader can take an iterable of capability factories in its constructor, or it can load them using ServiceLoader, as shown in the code snippet below:
CapabilityLoader capabilityLoader = new CapabilityLoader();
capabilityLoader.getFactories().add(new TestServiceFactory<Object>());
ProgressMonitor progressMonitor = new PrintStreamProgressMonitor();
for (CapabilityProvider<?> cp: capabilityLoader.load(new TestCapabilityFactory.Requirement("Hello World"), progressMonitor)) {
    System.out.println(cp);
    Flux<?> publisher = cp.getPublisher();
    publisher.subscribe(System.out::println);
}
Factories can also be added post-construction with getFactories().add(factory).
Service requirements and capabilities provide functionality similar to ServiceLoader - requesting instances of a specific type - but extend it with the ability to provide an additional service requirement. This functionality is provided by ServiceCapabilityFactory and ServiceCapabilityFactory.Requirement.
CapabilityLoader capabilityLoader = new CapabilityLoader();
capabilityLoader.getFactories().add(new TestServiceFactory<Object>());
ProgressMonitor progressMonitor = new PrintStreamProgressMonitor();
@SuppressWarnings({ "unchecked", "rawtypes" })
ServiceCapabilityFactory.Requirement<List<Double>, Double> requirement = (ServiceCapabilityFactory.Requirement) ServiceCapabilityFactory.createRequirement(List.class, null, 33.0);
for (CapabilityProvider<?> cp: capabilityLoader.load(requirement, progressMonitor)) {
    System.out.println(cp);
    Flux<?> publisher = cp.getPublisher();
    publisher.subscribe(System.out::println);
}
It is also possible to load services from ServiceLoader using subclasses of ServiceFactory. You'd need to subclass ServiceFactory in a module which uses a particular service and override its stream(Class<S> service) method as shown below:
@Override
protected Stream<Provider<S>> stream(Class<S> service) {
    return ServiceLoader.load(service).stream();
}
Then you’d need to add the factory to the loader:
capabilityLoader.getFactories().add(new TestServiceFactory<Object>());
As mentioned above, capability factories can be explicitly added to a CapabilityLoader or loaded using ServiceLoader.
Below is an example of a capability factory:
public class TestCapabilityFactory implements CapabilityFactory<TestCapabilityFactory.Requirement, Integer> {

    public record Requirement(String value) {}

    @Override
    public boolean canHandle(Object requirement) {
        return requirement instanceof Requirement;
    }

    @Override
    public CompletionStage<Iterable<CapabilityProvider<Integer>>> create(
            Requirement requirement,
            BiFunction<Object, ProgressMonitor, CompletionStage<Iterable<CapabilityProvider<Object>>>> resolver,
            ProgressMonitor progressMonitor) {
        return resolver.apply(MyService.class, progressMonitor).thenApply(cp -> {
            @SuppressWarnings({ "unchecked", "rawtypes" })
            Flux<MyService> myServiceCapabilityPublisher = (Flux) cp.iterator().next().getPublisher();
            return Collections.singleton(new CapabilityProvider<Integer>() {

                @Override
                public Flux<Integer> getPublisher() {
                    Function<MyService, Integer> mapper = ms -> ms.count(((Requirement) requirement).value());
                    return myServiceCapabilityPublisher.map(mapper);
                }

            });
        });
    }

}
There are a number of implementations of CapabilityFactory in different Nasdanika modules, most of them extending ServiceCapabilityFactory. In Eclipse or another IDE, open the CapabilityFactory type hierarchy to discover available implementations.
Most Nasdanika capabilities are based on the Eclipse Modeling Framework (EMF)[2], Ecore[3] models in particular. One of the key objects in EMF Ecore is a ResourceSet. A resource set has a package registry, a resource factory registry, and a URI converter. The org.nasdanika.capability.emf package provides capability factories for contributing to a resource set. It allows requesting a resource set from a capability loader; the returned resource set is configured with registered EPackages, resource factories, adapter factories, and URIHandlers.
CapabilityLoader capabilityLoader = new CapabilityLoader();
ProgressMonitor progressMonitor = new PrintStreamProgressMonitor();
Requirement<ResourceSetRequirement, ResourceSet> requirement = ServiceCapabilityFactory.createRequirement(ResourceSet.class);
for (CapabilityProvider<?> capabilityProvider: capabilityLoader.load(requirement, progressMonitor)) {
    ResourceSet resourceSet = (ResourceSet) capabilityProvider.getPublisher().blockFirst();
}
The snippet below shows how to filter resource set contributors with a predicate:

CapabilityLoader capabilityLoader = new CapabilityLoader();
ProgressMonitor progressMonitor = new PrintStreamProgressMonitor();
Predicate<ResourceSetContributor> contributorPredicate = ...;
ResourceSetRequirement serviceRequirement = new ResourceSetRequirement(null, contributorPredicate);
Requirement<ResourceSetRequirement, ResourceSet> requirement = ServiceCapabilityFactory.createRequirement(ResourceSet.class, null, serviceRequirement);
for (CapabilityProvider<?> capabilityProvider: capabilityLoader.load(requirement, progressMonitor)) {
    ResourceSet resourceSet = (ResourceSet) capabilityProvider.getPublisher().blockFirst();
}
You may also provide an instance of ResourceSet to be configured in the requirement.
Create a class extending EPackageCapabilityFactory:
public class NcoreEPackageResourceSetCapabilityFactory extends EPackageCapabilityFactory {

    @Override
    protected EPackage getEPackage() {
        return NcorePackage.eINSTANCE;
    }

    @Override
    protected URI getDocumentationURI() {
        return URI.createURI("https://ncore.models.nasdanika.org/");
    }

}
and add it to module-info.java provides:

provides CapabilityFactory with NcoreEPackageResourceSetCapabilityFactory;
Create a class extending ResourceFactoryCapabilityFactory:
public class XMIResourceFactoryCapabilityFactory extends ResourceFactoryCapabilityFactory {

    @Override
    protected Factory getResourceFactory() {
        return new XMIResourceFactoryImpl();
    }

    @Override
    protected String getExtension() {
        return Resource.Factory.Registry.DEFAULT_EXTENSION;
    }

}
and add it to module-info.java provides as a CapabilityFactory.
Create a class extending URIConverterContributorCapabilityFactory:
public class ClassPathURIHandlerResourceSetCapabilityFactory extends URIConverterContributorCapabilityFactory {

    @Override
    protected void contribute(URIConverter uriConverter, ProgressMonitor progressMonitor) {
        uriConverter.getURIHandlers().add(0, new URIHandlerImpl() {

            @Override
            public boolean canHandle(URI uri) {
                return uri != null && Util.CLASSPATH_SCHEME.equals(uri.scheme());
            }

            @Override
            public InputStream createInputStream(URI uri, Map<?, ?> options) throws IOException {
                return DefaultConverter.INSTANCE.toInputStream(uri);
            }

        });
    }

}
and add it to module-info.java provides as a CapabilityFactory.
Service capabilities explained above are used by Graph and Function Flow for loading node processors and connection processors for a specific requirement using NodeProcessorFactory and ConnectionProcessorFactory respectively - for example, for code generation, execution, or simulation.
One future application of the capability framework is creating a list of solution alternatives for an architecture/pattern. For example, there might be multiple RAG embodiments with different key types, key extractors, stores, … Some of the "design dimensions" are listed below:
As you can see, the number of potential combinations can easily go into the thousands or even be infinite. A reactive approach with filtering and sorting may be helpful in selecting a solution which is a good fit for a particular use case - the number and type of data sources, etc. For example, if the total size of the data is under a few gigabytes, an in-memory store may be a better choice than, say, an external (vector) database. Also, a good old bag of words might be better than embeddings - e.g., it might be cheaper.
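Such fit-based selection can be sketched with plain Java streams (the alternatives and cost figures below are made up purely for illustration):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class SolutionSelection {

    // A candidate solution with simple quality attributes (illustrative).
    record Alternative(String store, long maxDataBytes, double monthlyCost) {}

    // Pick the cheapest alternative that can handle the given data size.
    static Optional<Alternative> select(List<Alternative> alternatives, long dataBytes) {
        return alternatives.stream()
            .filter(a -> a.maxDataBytes() >= dataBytes)
            .min(Comparator.comparingDouble(Alternative::monthlyCost));
    }

    public static void main(String[] args) {
        List<Alternative> alternatives = List.of(
            new Alternative("in-memory", 4L << 30, 0.0),        // up to 4 GiB, free
            new Alternative("vector-db", Long.MAX_VALUE, 50.0)  // unlimited, paid
        );
        // 1 GiB of data fits in memory, so the free in-memory store wins.
        System.out.println(select(alternatives, 1L << 30).get().store());
    }
}
```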
Solution alternatives may include temporal or monetary aspects. For example, version X of Y is available at time Z, where Z might be absolute or relative - say, Z days after project kick-off or license fee payment. Identified solutions meeting requirements can have different quality attributes - costs (to build, to run), timeline, etc. These quality attributes can be used for solution analysis; e.g., one solution can be selected as a transition architecture and another as the target architecture.
Family reasoning demonstrates application of the capability framework as a backward chaining engine. Family relationships such as grandfather and cousin are constructed by requiring and combining relationships such as child and sibling.
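The derivation of a compound relationship from primitive ones can be sketched in plain Java (a toy model, not the actual family reasoning example; the names are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class FamilyReasoning {

    // child -> parent edges of a toy family model
    static final Map<String, String> PARENT = new HashMap<>();

    static {
        PARENT.put("Alan", "Bob");
        PARENT.put("Bob", "Carol");
    }

    // "grandparent" is derived by chaining two "parent" requirements,
    // mirroring how the framework combines primitive relationships.
    static String grandparent(String person) {
        return PARENT.get(PARENT.get(person));
    }

    public static void main(String[] args) {
        System.out.println(grandparent("Alan")); // Carol
    }
}
```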
This possible application is similar to backward reasoning. Imagine an algorithmic trading strategy which uses several technical indicators, such as moving averages, to make trading decisions. Such a strategy would submit requirements for technical indicators, which would include the symbol, indicator configuration, and time frame size. Technical indicators in turn would submit a requirement for raw trading data. A technical indicator such as a moving average would start publishing its events once it receives enough trading data frames to compute its average.
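The "start publishing only once enough frames have arrived" behavior can be sketched in plain Java (a simplification; a real implementation would publish through a reactive stream):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class MovingAverage {

    private final int window;
    private final Deque<Double> frames = new ArrayDeque<>();
    private final List<Double> published = new ArrayList<>();

    MovingAverage(int window) {
        this.window = window;
    }

    // Accept a raw trading data frame; publish an average only once
    // the sliding window is full.
    void onFrame(double price) {
        frames.addLast(price);
        if (frames.size() > window) {
            frames.removeFirst();
        }
        if (frames.size() == window) {
            published.add(frames.stream().mapToDouble(Double::doubleValue).average().orElseThrow());
        }
    }

    List<Double> published() {
        return published;
    }

    public static void main(String[] args) {
        MovingAverage ma = new MovingAverage(3);
        for (double price : new double[] {1, 2, 3, 4}) {
            ma.onFrame(price);
        }
        System.out.println(ma.published()); // [2.0, 3.0]
    }
}
```

Note that nothing is published for the first two frames - the indicator stays silent until its requirement for raw data has been satisfied often enough.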
A trading engine would submit a requirement for strategies. A strategy factory may produce multiple strategies with different configurations. The trading engine would perform "paper" trades, select well-performing strategies, and discard those which perform poorly. This can be an ongoing process: if a strategy deteriorates, it is discarded and a new strategy is requested from strategy publishers - this process can be infinite.
This application is similar to stream processing and may be combined with backward reasoning. Let’s say we want to train a model to answer questions about family relationships for a specific family. For example, “Who is Alan’s great grandmother?” A single relationship in the model can be expressed in multiple ways in natural language. And multiple relationships can be expressed in a single sentence. For example:
So, on top of a model there might be a collection of text generators. The output of those generators can be fed to a model.
A similar approach can be applied to other models - customer/accounts, organization or architecture model, etc.
For example, from the Internet Banking System we can generate something like “Accounts Summary Controller uses Mainframe Banking System Facade to make API calls to the Mainframe Banking System over XML/HTTPS”. “make API calls” may also be generated as “connect” or “make requests”.
In a similar fashion a number of questions/answers can be generated.
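Generating multiple natural-language variants of a single model relationship can be sketched like this (the phrasings and types are illustrative, not part of any Nasdanika model):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SentenceGenerator {

    // One relationship from a model.
    record Uses(String source, String target) {}

    // Generate several phrasings of the same relationship;
    // such variants can then be fed into model training.
    static List<String> variants(Uses uses) {
        return List.of("uses", "connects to", "makes requests to").stream()
            .map(verb -> uses.source() + " " + verb + " " + uses.target())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        variants(new Uses("Accounts Summary Controller", "Mainframe Banking System Facade"))
            .forEach(System.out::println);
    }
}
```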
[2] See Eclipse Modeling Framework (EMF) - Tutorial and the EMF: Eclipse Modeling Framework book for more details.
[3] See the EMF Ecore chapter in the Beyond Diagrams book for a high-level overview of EMF Ecore.