
Contextual Activation

Marco Aurélio da Silva edited this page Aug 30, 2017 · 2 revisions

Contextual activation here stands for calling a function with a subjective object extension (followed by a set of optional arguments). The function itself acts as a scope, so whenever we call such a function, the call may be seen as a scope activation. The talent used in this activation is not only a contextual layer, but also a capability providing rights amplification over the passed target object. Rights amplification is a kind of synergy in a programming context, and in the Capability Model it is a form of authentication. That said, it can be used to provide a certain degree of encapsulation for the target object -- if you hold both the talent and the target object, you are able to access hidden fields of the resulting subject; you just have to ensure that the talent is not freely available to everyone.
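As a sketch of this rights-amplification idea: a module can keep the talent private while the target object circulates freely, so only that module can reach the subject's hidden fields. Everything except `talents.activate` below (the module name `talents`, the `balance` field, and `make_reader`) is illustrative, not part of any documented API.

```lua
local talents = require 'talents'  -- assumed module name, from the example below

-- `talent` is created elsewhere and held privately by whoever calls make_reader.
local function make_reader (talent)
    return function (account)
        local balance
        talents.activate(talent, account, function (subject)
            -- `balance` is a hidden field, reachable only through the subject
            balance = subject.balance
        end)
        return balance
    end
end
```

Anyone may hold `account`, but only holders of `talent` (here, captured by the closure returned from `make_reader`) can amplify it into the subject.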

Returning to the subjective/contextual theory, contextual activation gives us the power to model use cases (from the Software Engineering field) explicitly in the software itself. We just have to provide unique talents for every use case in the design, and it becomes straightforward (perhaps even honest) to map the specification into the implementation. By unique, I mean an almost exclusive talent, because a subject is lazily computed for the interaction between the talent and the target object.
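For instance, two use cases from a banking specification could each get their own talent, so every activation site corresponds to exactly one use case. The talent names and the `balance` field here are illustrative assumptions; only the `talents.activate` signature comes from this page.

```lua
-- One (nearly) exclusive talent per use case in the design.
local function withdraw (account, amount)
    talents.activate(withdrawal_talent, account, function (subject, amount)
        subject.balance = subject.balance - amount
    end, amount)  -- extra arguments after the scope are forwarded to it
end

local function deposit (account, amount)
    talents.activate(deposit_talent, account, function (subject, amount)
        subject.balance = subject.balance + amount
    end, amount)
end
```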

Compared with ML functors, talents exhibit two semantics akin to the two semantics of functor application. While the functor semantics depend upon the related types in the ML family, our talent semantics depend solely on the passed target object. The first semantic is called generative: it yields fresh modules even for the same inputs passed to an ML functor. The second semantic is called applicative: the same output module is generated for the same input module (that is the ideal, but side-effects can give rise to fresh modules "unsoundly" sharing an equality type constraint among functor-dependent type abstractions). You may have noticed that talent application is "generative" while talent activation is "applicative". Talent activation is therefore confined, to avoid leaking possible side-effects from the subject, thus disallowing some kinds of global state. But be aware that this relies on the fact that the talent and the target object are not both freely accessible (that is, one of them can be freely accessible, but the other should not be exposed).
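The applicative character of activation can be sketched as follows: two activations on the same (talent, object) pair reach the same underlying subject, so state written by one activation is visible to the next. The `counter` field is an illustrative assumption.

```lua
talents.activate(talent, object, function (subject)
    subject.counter = 1            -- write through the shared subject
end)

talents.activate(talent, object, function (subject)
    -- "applicative": same pair, same subject, so the state written in
    -- the previous activation (subject.counter) is visible here
    print(subject.counter)
end)
```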

Activations can also be seen as a kind of Ownership, more specifically a model following the owners-as-accessors discipline, that is, Dynamic Ownership. Every time we perform an activation on the same pair (talent & target), we gain the capability to access the associated subject, but only in a given scope. Assuming that the system owns such a subject, it can be leaked, but not accessed outside the owner's boundaries (this is the general definition of Dynamic Ownership). In such an activation, the passed scope function can be thought of as a client performing a borrowing (i.e., taking temporary ownership) before its own activation; after that activation, the scope gives up this temporary ownership, transferring control back to our system kernel. Boundaries here stand for all potential calls derived from the point of the boundary object (that is, inspecting the call stack, an operation is valid only if the boundary object appears at some point on that stack). The owners-as-accessors discipline is related to an Abstract Data Type in some sense -- the owner is the module and the owned is the data type.
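A small sketch of this borrowing discipline, under the semantics described above: the scope holds the subject only for the duration of the activation, and a leaked reference falls outside the owner's boundary afterwards.

```lua
local leaked

talents.activate(talent, object, function (subject)
    -- valid: the scope currently borrows the subject
    subject:touch()        -- illustrative method, not part of any real API
    leaked = subject       -- the reference itself can escape...
end)

-- ...but, per owners-as-accessors, using `leaked` here happens outside
-- the boundary (the scope no longer appears on the call stack), so the
-- access should be treated as invalid.
```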

Enough cheap talk; to perform an activation, we say:

local result = {
    talents.activate (talent, object, scope, ...)
}

where scope will often be bound to something like the following:

local function scope (subject, ...)
    -- perform some actions on subject here --
end

Internally, activation lazily computes a talent application; the result of that application is associated with both the talent and the target object, so further activations on the same pair avoid unnecessary re-computation. Note that the lifetime of the associated subject depends on that pair: if either of these objects dies, the subject dies as well (unless there are leaked references, but these will be pretty useless due to the death of one needed key).
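The pair-keyed cache with this lifetime behavior can be reimplemented in plain Lua with weak-keyed tables, so that the subject becomes collectible once either key dies. This is a sketch of the idea, not the library's actual code; `apply` stands in for whatever computes the talent application.

```lua
-- talent -> (object -> subject), with weak keys at both levels
local cache = setmetatable({}, { __mode = 'k' })

local function subject_for (talent, object, apply)
    local per_object = cache[talent]
    if not per_object then
        per_object = setmetatable({}, { __mode = 'k' })
        cache[talent] = per_object
    end
    local subject = per_object[object]
    if not subject then
        subject = apply(talent, object)  -- compute the application lazily
        per_object[object] = subject     -- memoize for further activations
    end
    return subject
end
```

Because both levels use weak keys, collecting the talent drops the whole inner table, and collecting the object drops its cached subject, matching the lifetime rule stated above.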
