Approach is an emerging open standard bringing software concepts into physical & social settings, as well as innovating the development of traditional software & AI. More formally, Approach is a set of constraints, concepts and microlanguages which integrate together and universalize the requirements of scalable, deeply dynamic systems. Each microlanguage forms an Approach “layer”. Similar in some aspects to the TCP/IP or OSI models, Approach layers originate from fundamental primitives and build up levels of abstraction. While perhaps not the most common choices, certain terminology has been carefully debated over 17+ years.
Unlike either TCP/IP or OSI, Approach starts outside of software, at the most basic unit of achieving any goal: work. The separation of concerns found in each layer ensures that dependency issues extend no further than adjacent layers, enabling effective roles, engineering and operation.
With software specifically, Approach drills into the concept of libraries to a novel degree. This produces an effect similar to that of standardized lumber sizes on the construction industry: interchangeable materials, strategies and structures. Individuals may learn a small set of skills on one layer and use it to navigate knowledge gains on others. This feature is critical for integrating the coming tsunamis of technologies in the so-called “Singularity.” Together, all layers form a network of interactions, and a wide variety of emergent benefits occur, including solving the generalization problem of AI on the dataset side rather than the model side.
The precursor to Approach was a small library to stream simple wire protocols, written around 2002. The basic methodology was incorporated into a visual effects and particle rendering engine and eventually re-adapted to web and manufacturing projects in 2009. Two years later, Approach was born out of a massive refactoring and generalization process which enabled more efficient communication between engineers and non-technical team members.
Since then, Approach has been stress tested and evolved in over 300 projects, slowly climbing its way up the layers of abstraction until spanning the full concept of any possible system. Over 80 private developers have worked on Approach over its history. Late 2022 saw the beginning of the Approach 2.0 overhaul, creating a full stack; the resulting layer pointcuts revealed that we had actually been revolving around a set of central concepts which would require broader feedback and contribution to fulfill.
Approach considers an “orchestra” to be the total set of “instruments” (or devices) within a trust realm, along with hierarchical “ensembles” of said instruments, and the players, or users, of said instruments. Whether they implement the Approach standard or not, all platforms and organizations form implicit trust roots and set permissions on access to facilities, instruments and resources.
The Approach Orchestration model incorporates Role-Based Access Controls across its rich descriptive fabric and network topology. This simultaneously enables Command & Control automation as well as realtime auditing, quality assurance, discoverability and more. Fully implemented, every instrument reveals a manifest of services and composite production, reachable within the orchestra.
Within the computing domain, servers are often the instruments of import for developers, with the ensembles generally being various workgroups, clusters and subnets. For a machinist, instruments may include any tool or machine used to accomplish work. Orchestras can be embedded in or referenced from other Orchestras, where identity information or trust roots change. For example, a smart vehicle is an orchestra of its own but may exist simply as a building block for a commercial fleet, in the Logistics ensemble of some organization’s Orchestra. Where these pointcuts are made depends entirely on the context of identity and trust. All standard Approach Orchestras may serve as a source of Resources for other orchestras (standard or non-standard).
The standard prescribes a variety of standard semantics to enable seamless sharing and integration. Projects which implement this standard are able to register services, products, media libraries and other artifacts with Orchestration Syndicate or another syndication root.
Any artifacts or processes implemented according to the standard are addressable, exchangeable and can be integrated with other projects with compatible context. Approach strongly incentivizes industries to maintain a collective library of standard interfaces, fixtures, services and adapters. All processes and productions, both physical and digital, may be discovered and syndicated to other Approach projects.
What HTTP does for web media, Approach can do for everything. All competitive technology eventually becomes commonplace and therefore a drag on companies to maintain. Gradual standardization turns overhead into opportunity by offloading old technology to a standard extension once it is no longer competitive. With syndication, innovators can become highly trusted sources from which the rest of the industry forks, providing new partnership opportunities as previous competitors integrate the technology while offloading costly legacy maintenance to the community.
Approach provides a rich description of the State of the World (SotW) for any properly mapped system. We find that the coverage and total ordering greatly enhance AI applications; moreover, Approach’s specific “approach” of using “timeline-oriented programming” can be used to induce realtime contexts.
Critically, Approach is simultaneously “top-down”, “bottom-up” and “inside-out.” Projects may implement a single layer of Approach according to the standard, or multiple layers, intermixed with non-standard elements ad hoc. This extreme flexibility, retained alongside key constraints, provides a deep set of core truths for machine learning methods, reducing complexity and increasing learning.
Perhaps most importantly, Approach provides a roadmap for whole industries to contribute toward shared utility models and shift gracefully toward safe AI generalization.
Since Approach represents a set of progressively more abstract timelines of production, we must consider the smallest possible units of those timelines. Our fundamental starting point revolves around pure, physical “work” upon some material, resource or pre-produced media. Any goal which is accomplished de facto must involve work to transition from one state to another. Work costs (at least) energy, time, wear, risk and whatever media is being transformed.
Work happens on a continuum even if the process is quantized. More technically, all work has a leading edge or a higher-dimensional analog. Less technically, all ‘actions’ on ‘things’ have some type of direction. Approach generalizes work to streams. A river does not simply flow in place, but downhill along some given topology. Letters are written in a flow, programs are processed from input to output, assembly lines and conveyor belts move in a specific direction at a specific rate. Even pottery requires the potter to glide smoothly along the direction of rotation.
The core concepts of the work layer are:
While not 100% ubiquitous, almost all natural languages have some concept of rendering. Distinct from pure crafting (a discipline) or making (certain vagaries), to “render” implies a new format, distinct from the input state, produced by a given process.
The Render timeline begins when a stream needs to prepare its next frame along the direction of output, generally called the ‘head’ of the rendering process. This is followed by the corpus, or body, wherein the full frame is passed over, including any internal renderings. Finally the “tail” end of a render timeline describes steps required before becoming free to render the next elements of a stream.
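As a rough illustration, the head, corpus and tail phases could be modeled as a streaming lifecycle. The sketch below is hypothetical Python; the `Render` class and its method names are our own assumptions for illustration, not the standard's API:

```python
class Render:
    """Sketch of a render timeline: head -> corpus -> tail."""

    def head(self):
        # Prepare the next frame along the direction of output.
        yield "<div>"

    def corpus(self):
        # Pass over the full frame, including any internal renderings.
        for child in getattr(self, "children", []):
            yield from child.stream()
        yield "content"

    def tail(self):
        # Steps required before becoming free to render the next element.
        yield "</div>"

    def stream(self):
        yield from self.head()
        yield from self.corpus()
        yield from self.tail()

print("".join(Render().stream()))
```

Because each phase is a generator, a stream can begin emitting its head before the corpus is fully resolved, which is the property that makes the timeline useful for streaming.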
For multi-dimensional processes there are some specific nuances; however, the linear timeline remains the focal point. Staying relative to the timeline enables any render to effectively be “a sequence of work applied to a specific configuration of input media expecting a specifically categorized format of output media.” Factoring the format and structural requirements out of upstream concerns proved a critical step in separating and scaling the human elements of those concerns.
Imprints span the space of templates, moulds, mechanical dies, casts, tilings and similar repeated forms. They are built of renderable elements and may be seen as compound renderable formats with dynamic points of interface. The key concepts for Imprints are:
The Imprint subsystem provides an extensible Pattern dialect by which to declaratively define multi-element renderings with specific places for operations and media to be attached, welded, bolted, spliced, etc. Within a compute context, Imprint produces new complex Render types given a prepared sample output. Within a non-compute context, Imprint provides the facilities of an “Abstract Syntax Tree” (AST) to physical and social systems. A common 2x4 wooden board is an example of a pattern, with the process of fulfilling said pattern as the minted format definition. Similarly, any compound UI element in a web context may be represented as such a pattern, producing a type class which faithfully reproduces the pattern. Understanding Imprint is a deep iceberg chart: while on the surface appearing similar to many templating engines, Imprint takes special care to preserve a certain concept of physical adjacency and alignment.
An imprint, with its interfaceable points, fits together perfectly flush, just as a handle and a hand, a lock and a key, or a footprint in sand. We use this total covering and adjacency concept to great effect in scalability and in subsequent layer functionality.
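A minimal sketch of an imprint as a pattern with dynamic interface points might look like the following. This is hypothetical Python; the `[@ token @]` slot syntax and the `Imprint`/`mint` names are illustrative assumptions, not the standard's dialect:

```python
import re

class Imprint:
    """Sketch: a pattern with dynamic interface points written [@ token @]."""
    TOKEN = re.compile(r"\[@ (\w+) @\]")

    def __init__(self, pattern: str):
        self.pattern = pattern
        # Interface points are discoverable from the pattern itself.
        self.tokens = self.TOKEN.findall(pattern)

    def mint(self, **bindings) -> str:
        # Fill every interface point; the adjacency and alignment of the
        # surrounding material is preserved exactly as authored.
        return self.TOKEN.sub(lambda m: str(bindings[m.group(1)]), self.pattern)

card = Imprint("<article><h1>[@ title @]</h1><p>[@ body @]</p></article>")
print(card.tokens)
print(card.mint(title="Hello", body="World"))
```

The point of the sketch is that the pattern is data, not code: its slots can be enumerated, type-checked or wired up by later layers without re-parsing the surrounding material.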
Resources, as the term suggests, are always “repeated” in a sense from some “source.” All formatted media and output from any system may be considered a rendering of that system, whether formally or not - and may admit some imprinted pattern whether formally or not. The Resource layer handles the mapping of such media, from both internal and external sources, into the current project context. Resources have a few extra considerations due to their requirement of having external connections. Their primary timeline consists of:
Domain specific concerns extend the Resource considerations in specific ways, allowing for robust ecosystems of operations, materials, documents and media while continuing to integrate with upstream layers. Aside from its core timeline, Resource exposes several aspects for discovery, metadata and other needs reminiscent of ORM systems for databases, yet extensible to warehouse, factory and office needs. Resource Aspects:
Resource searching:
Resource, along with the Orchestra layer, represents the most active ongoing discussion and overhauls. We expect Approach v3 to bring significant upheaval; however, we have developed a mapping technology for seamless transition across breaking changes from major version to major version.
We strongly encourage feedback here, but warn that the current concept set is not arbitrary and embodies deep concern mitigation. Understanding the deeper theory behind the Resource layer is required to meaningfully contribute to the discussion. Users of Resource, however, will find its interface quite intuitive, providing file-like access to complex data and physical resource connections.
Further, any external Approach Orchestra may automatically be consumed by a project as a Resource, which will map discoverable types onto the local installation. Resource’s predecessor, Dataset, was much more crude yet had been used to great effect in logistics, big data and supply chain environments.
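The file-like access mentioned above could be sketched as path-style navigation into mapped data. This is hypothetical Python; the `Resource` class, the `find` method and the dictionary backing are illustrative assumptions, not the standard's interface:

```python
class Resource:
    """Sketch: file-like, path-addressed access to a mapped data source."""

    def __init__(self, tree: dict):
        # In practice the tree would be backed by a connector (database,
        # warehouse, API, ...); a plain dict keeps the sketch self-contained.
        self.tree = tree

    def find(self, path: str):
        # Navigate directory-style into nested data, e.g. "sales/orders".
        node = self.tree
        for part in path.strip("/").split("/"):
            node = node[part]
        return node

source = Resource({"sales": {"orders": [101, 102], "refunds": [7]}})
print(source.find("sales/orders"))
```

The design choice illustrated is that consumers address data the way they address files, leaving the connector free to change underneath without disturbing upstream layers.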
The Component layer handles exactly what the name suggests: components, in the natural-language sense of the word, “a constituent element, as of a system” (from The American Heritage® Dictionary of the English Language, 5th Edition). Similar in many ways to the next layer, Compositions, as well as a previous layer, Imprints, Components specifically know how to map Resources and local information or media into minted patterns. Relying on the standardized interface of said patterns and resource maps, a Component follows this timeline:
Similar in many ways to the MVVC pattern’s “view controller,” Components also take a further step to consolidate dependencies and create two-way reflection to underlying resources. Components provide an important split between internal systems and consumer-facing, published media. Whereas traditional software processes may link variables, class properties and fields directly to some output, Components specifically represent the state of the interfaceable points, often called tokens, within the patterns they build.
The underlying references filling those tokens are arbitrary yet tracked by Components. For web and connected works, this enables robust injection proofing, safe external addressability of internal resources and deep reflection. For other contexts, Component enables similar forms of quality assurance, process validation and portable modularity. Distinct from MVVC, Component allows deep trees of dependency in both sub-components and/or used Resources. Reversal of its mappings allows Components to be driven primarily by simple external tools, greatly reducing human error. Another instant benefit of the Component definition, built atop previous layers, is implicit integration and functionality testing.
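As a toy model of token-based binding, a Component might expose only its declared interface points, so that nothing else can leak into the output. This is hypothetical Python; the names and the `[@ token @]` syntax are assumptions carried over from the earlier sketch:

```python
class Component:
    """Sketch: a Component binds resource fields to a pattern's tokens."""

    def __init__(self, pattern_tokens, resource):
        # Only declared interface points are exposed; everything else in the
        # resource stays internal by construction.
        self.bindings = {t: resource[t] for t in pattern_tokens}

    def render(self, template: str) -> str:
        out = template
        for token, value in self.bindings.items():
            out = out.replace("[@ %s @]" % token, str(value))
        return out

row = {"name": "Widget", "price": 9.99, "internal_cost": 4.20}
c = Component(["name", "price"], row)
print(c.render("<li>[@ name @]: $[@ price @]</li>"))
# internal_cost was never bound to a token, so it cannot reach the output.
```

Because the bindings are tracked as data, the mapping can be reversed: an external tool can ask which resource field feeds which token and drive the Component from the outside.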
What precisely constitutes a Composition can be difficult to discern without having worked in Approach over two or three projects. For compute, the distinction is fairly simple, as any URL endpoint is an instance of some Composition type. For a factory, the final packaged, ready-to-ship product will tend to be the target.
Ultimately, Compositions are the primary structure within which Components are placed, likely laid out or embedded within some renderable form specific to that type of composition. There are times when a simple rendering is more than enough and the complexity of Components or Compositions is not required. When dynamic and composite processes are at play, an instance of a Composition type is always the final output to some external triggering input. A Composition timeline primarily consists of:
Services are general purpose pipelines which take in stream(s) of any form, decode the stream(s) to a standard format, perform some process upon them and encode them to a new, potentially distinct, format. For simple tools, a service is really an interaction with some interface driven by external signals. Leaning on our earlier example of handles, even a hammer or screwdriver admits certain services, such as equipping (or forming a connection to) the hammer, using the hammer to drive a nail and prying a nail loose with it.
While not the most natural use of the term Service in all contexts (indeed we would like to have dubbed this “Interface”; however, that is a reserved word in too many compute contexts), Service, regardless of name, just narrowly manages to encapsulate what it means to have an interaction, especially interactions incorporating external influences. Service’s extensible semantics include built-in understanding of incoming or outgoing flows, format-to-format casting, and branching process timelines during the course of an interaction. Resource’s Connectors are implemented as Service layer types, for example. Aside from this, Services have the capacity to reflect on any Approach concepts without the usual expense of Reflection, by navigating the layer structures. Service Timeline:
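The decode, process, encode pipeline can be sketched generically. This is hypothetical Python; the `service` function and the JSON-to-CSV cast are illustrative assumptions, not part of the standard:

```python
import json, csv, io

def service(stream: str, decode, process, encode) -> str:
    """Sketch: a Service as a decode -> process -> encode pipeline."""
    return encode(process(decode(stream)))

def to_csv(rows):
    # Encode a list of rows as CSV text.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue().strip()

# Example: JSON in, CSV out, a format-to-format cast with a filter step.
result = service(
    '[{"sku": "A1", "qty": 3}, {"sku": "B2", "qty": 0}]',
    decode=json.loads,
    process=lambda rows: [[r["sku"], r["qty"]] for r in rows if r["qty"] > 0],
    encode=to_csv,
)
print(result)
```

Keeping decode and encode separate from the process step is what lets a Service cast between formats freely: the same process can be reused with any pair of codecs.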
We have discussed hammers, assembly lines, document writing, web servers and other devices on which the above mentioned Services, Compositions, Components etc are rendered and which ultimately perform some type of work. Such tools are instruments. What makes them instruments is the condition of being tools that admit services by which to achieve goals. The collection of all instruments is an Orchestra’s interfaceable infrastructure.
Instrument timelines are an emergent effect of all the work and throughput occurring with/within said instrument. Instruments are used, or played, by Players, entities within the Orchestra. This gives rise to our first notions of “Rhythm” and “Pitch.” Based on output metrics, frequency variation and run volume compared to capacity, Instruments distill a variety of healthchecks into musical analogs. While not all metrics strictly meet this criterion, the analog is close enough to produce actual auditory feedback such as MIDI.
The result is the ability to hear whether complex systems are running smoothly, choppily or bottlenecked. While we cannot always absorb a mass of complex statistics at a glance, we are quite quick to distinguish timing and pitch which do not harmonize. This further enables operational administrators to isolate and triage problem areas with much reduced effort. Instruments also function as a sort of DNS endpoint within our private domain schema. All Composition URLs and Services exist along some instrument or set of instruments, exposing their capabilities. The Approach reference implementations provide automation for deploying several compute instruments.
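One plausible sketch of the pitch analog maps instrument utilization onto MIDI note numbers, so a saturated instrument literally sounds out of range. This is hypothetical Python; the scaling and note range are our own assumptions:

```python
def pitch(load: float, capacity: float) -> int:
    """Sketch: map utilization onto a MIDI note.

    60 (middle C) at idle, rising two octaves toward 84 at saturation,
    so healthy instruments cluster while an overloaded one stands out.
    """
    utilization = min(max(load / capacity, 0.0), 1.0)
    return 60 + round(utilization * 24)

fleet = {"web-1": 40, "web-2": 45, "web-3": 198}
notes = {name: pitch(load, capacity=200) for name, load in fleet.items()}
print(notes)
```

Fed into any MIDI synth, the evenly loaded instruments hold the same note while the bottlenecked one jumps out of the chord, which is the auditory triage effect described above.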
Groups and hierarchies of instruments and/or their players are called Ensembles. Examples include offices, datacenters (clusters and sub-clusters), the dashboard of a vehicle, a group of IoT devices, and so on. Ensembles specifically always refer to groups, even in the case of the implicit group of 1 instrument with 1 implicit service with 1 implicit Composition … down to a single rendering. Organizations and logistical groupings often require several layers of pure categorization between the apex of the network and the underlying instrumentation.
Ensembles are flexible enough to accommodate these unknown groupings. One orchestra may well be considered an ensemble of another. The rhythm and pitch concepts across an Ensemble’s constituent instruments are aggregated, monitored, reported and scaled via Ensembles. Resources shared between Ensemble instruments avoid a variety of duplicated efforts and bottlenecks through the use of spooling, cross-referencing and total ordering.
Significant automations are achieved by judiciously selecting the pointcut of Ensembles used across an organization. Most projects are deployed and run as an Ensemble. As with Instruments, Ensemble timelines are simply a result of the activity of lower layers.
The highest layer of abstraction is the Orchestra. Each Orchestra is a self-contained intranet with its own PKI, Trust Zones and Certificate Authority. The Orchestra represents the totality of some entity. This may be a university, a corporation, the body or some other complex system. Always, the primary distinction revolves around authorization, identity and trust.
Orchestras are able to command and control all underlying Ensembles, Instruments, Services, etc. via the Conductor, a privileged Instrument or Ensemble above the authorization of a standard admin. A standard Approach Orchestra supports X.509 certificates, Single Sign-On, and Federated Identity via OIDC and/or SAML. Standard Orchestras always implement https://secure.private, https://orchestra.private and https://conductor.orchestra.private, generally in a bastion-style, ultra-hardened environment with certain offline procedure requirements.
Orchestra-to-Orchestra connections are incredibly powerful and enable the syndication network which we seek to build. Through these mechanisms we can provide a decentralized, identity-based Internet-of-Internets completely agnostic to TCP/IP or Web3 technology choice.
Orchestration Syndicate is presently working on UI tools and reference implementations to provide fully working Orchestras, acting as a multi-device operating system while running on any underlying OS, whether for compute orchestras or for exposing non-compute systems to connected networks.
Players, the reader may have surmised, intentionally conflates the concepts of one who plays a musical instrument and one who plays a game. This cross-section proves of great utility for security and trust models.
Players are seen as (generally) opaque orchestras of their own. These may be other literal Approach orchestras, some external organization interacting via API, a person or other living entity, an AI system, or even discrete software running within a given Orchestra. Players are assigned role authorizations within the context of layers, layer types and instances of those layers. Some players may not have access to alter and save certain Resources or Components, while others are granted such permissions.
The total set of instruments, the infrastructure, is mapped 3 different ways, each totally covering the entire set using different schemes. Permissions must authorize players to perform any operation on any instrument, and agree across each mapping.
For example, Player Alice may have authorization to read all Resource layers, but some projects have private Resources which are not visible to Alice. Similarly, Player Bob may have “create” authorization to all Components in “CoolProject”; but if CoolProject uses an instrument marked to deny all such access except by the conductor(s), Bob will only be able to access other project Resources.
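The agreement rule across the three coverings might be sketched as follows. This is hypothetical Python; the three mapping names and the tuple-keyed tables are invented for illustration:

```python
def authorized(player: str, op: str, instrument: str, mappings: list) -> bool:
    """Sketch: an operation is allowed only if every covering map agrees."""
    return all(m.get((player, op, instrument), False) for m in mappings)

# Three hypothetical total coverings of the same infrastructure:
by_role    = {("alice", "read", "db-1"): True}
by_project = {("alice", "read", "db-1"): True}
by_trust   = {("alice", "read", "db-1"): False}  # instrument denies non-conductors

print(authorized("alice", "read", "db-1", [by_role, by_project, by_trust]))
```

Requiring unanimity across independent coverings means a single mis-scoped grant in one scheme cannot open access on its own, which is the deny-wins behavior in the Bob example above.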
Thanks to Approach’s coverage of concerns and wide-spanning integration, this provides immense value toward the integration of AI agents. We can enforce rulesets similar to those that would apply to human actors within an organization, at several levels of granularity and quality control, without exposing the ability to hijack anything and everything.
Evergreen Goals × Action Plans × Agile Sprints × Role-based Access Controls
Where the layers standardize engineering, design and operational constraints, climbs push agile toward a new aspiration for player interaction. After deep auditing of several variations on traditional agile “sprints” and general scrum, we believe a major deficiency of agile to be the sprint form factor itself.
Chiefly, sprints are not evergreen or semi-synchronous, but real-life goals are, especially in the workplace. Goals exist regardless of periodic iterations. Requirements do not fit in 1- or 2-week timeframes, and forcing them to will almost always create either rushed quality or wasted time. Action plans, a concept which has been around much longer, are less robust than sprints but have the desirable feature of being a step-wise, blocking, asynchronously initiated timeline.
Having merged these two concepts, we enforce a small set of required steps to break any large goal into recursive trees of sub-goals, similar to agile epics, with some extra considerations along the way which ensure proper roles. A nice effect of this is a new teaming process which exports several managerial concerns to the system this “human algorithm” produces.
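The recursive sub-goal structure could be sketched as a simple tree. This is hypothetical Python; the `Goal` type and its completion rule are illustrative assumptions, not the climb protocol itself:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """Sketch: an evergreen goal as a recursive tree of sub-goals."""
    name: str
    done: bool = False
    subgoals: list = field(default_factory=list)

    def complete(self) -> bool:
        # A goal is complete only when it and every sub-goal are complete;
        # there is no time-box, so unfinished work simply stays open.
        return self.done and all(g.complete() for g in self.subgoals)

ship = Goal("ship v2", done=True, subgoals=[
    Goal("design review", done=True),
    Goal("implementation", subgoals=[Goal("API", done=True), Goal("UI")]),
])
print(ship.complete())
```

Unlike a sprint, nothing here expires: the tree stays evergreen, each subtree can be initiated asynchronously, and completion propagates upward only when the blocking sub-goals actually finish.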
Essentially, we have removed the need for middle management and authority over other people, without taking away the high value they provide to the team or the prestige of leadership. This result is achieved largely by transferring authority to oversight of the climb process rather than over the members. Evaluation of performance is handled elsewhere, in quality control and administrative climbs. This also forces an organizational awareness that success or failure is often a matter of initiative and team selection rather than performance at all.
While climbs are not strictly required in an Approach Orchestra, we are building several tools to enable their integration into modern workflows. Further, they provide immense assurances on process scaling, auditing and viability. Most importantly, climbs represent a step-wise planning protocol which AI can be trained to both implement and verify. Not to mention they are secretly Turing complete themselves.
Orchestration Syndicate’s “Sympathetic Cortex” project is creating a comprehensive testing suite by which both fine-tunings and custom models may validate that various aspects (quantities, qualities, locations, fields, operations, authorizations, ...) are satisfied along any step of a Climb in a specific context. We will provide certification to any such models, similar to SSL certificates, enabling a novel feedback loop of increasing AI safety incentives. Prove your model’s effectiveness and trustworthiness in a context domain transparently, allowing those within a syndication network to rest easy buying and integrating your AI services.
For syndication to happen, root orchestras must exist outside of Orchestration Syndicate. We aim to create root rings for each industry, with a particular balance between official Approach orchestras and institutional, community, vendor and public/“wild” membership. These orchestras primarily house services for registering, storing and distributing layer packages as well as published media, products and APIs.
Note: the syndication network shall support several distribution models, from free and open software to point-of-sale, subscription, license, whitelabel and more. Notably, this model allows installation of SaaS-like services on local systems. We hope to free innovation from infrastructure constraints and vendor lock, and to increase true software ownership, while bringing sales and funding to open projects in our ecosystem. We have also forged certain partnerships to aid entrepreneurial founders in transitioning from passion project to business with expert sales, grant writing, OSC’s compute fabric and tooling.
The original goal of Approach v1 was simply to create scalable render farms for high-performance computing. The streaming, pooling, multiplexing and load-balancing mechanisms Approach uses across layers provide a blueprint to scale any feasible physical process - technically, economically and operationally.
While the entire course of scale concerns is far outside the scope of this document, the reference implementations of Approach shall stand as turnkey, auto-scaling systems. We are nearing completion of our automated setup for web clusters and will next move on to both AI and manufacturing contexts, creating similar project scaffolds.
If we can reach wider adoption, Approach’s unique organizing and engineering constraints will enable a small number of developers in each industry to expose a tremendous depth of capabilities.
Adding a single new format to Approach’s existing library instantly enables cloud computing of new media. Creating a new Resource type connector, while rather advanced, enables anyone in the world to automatically connect to products and services.
As our network accrues more definitions across the layers, and more use cases of those definitions, an interesting convergence will begin to take shape. Approach will always seek to unify as much as possible in the core layers, as long as this comes at no cost to universal flexibility. Similarly, Approach will encourage and incentivize domain-specific specialization of layers into official, community and vendor extensions.
Providing common ground to describe not only modalities, but complex relationships, planning and time, Approach stands to become the catalyst for a safe ride through the Singularity.
Contact us