Database abstraction means that, internally, data are not seen as records, tables and fields, but as objects we talk to in a unified way. Data repositories (database back-end servers) talk to the abstractor in their own dialects, the translation being done by an embedded engine. The abstractor translates this into the unified representation and presents this unified view to the user.
When a data change is needed, the user sends the unified view to the abstractor, which translates it into the required back-end dialect and saves the data.
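To make the picture concrete, here is a minimal sketch in C++ of what such an abstractor could look like. All names here (Abstractor, BackendDriver, UnifiedRecord, InMemoryDriver) are ours, invented for illustration; they are not actual LAB Project classes, and a real driver would speak SQL to a server rather than keep data in memory.

    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    #include <utility>

    // Back-end-neutral representation of one piece of data.
    struct UnifiedRecord {
        std::string key;
        std::map<std::string, std::string> fields;
    };

    // Each back-end server talks its own dialect behind this interface.
    class BackendDriver {
    public:
        virtual ~BackendDriver() = default;
        virtual void save(const UnifiedRecord& record) = 0;      // unified -> dialect
        virtual UnifiedRecord load(const std::string& key) = 0;  // dialect -> unified
    };

    // Stand-in back end so the sketch runs; a real driver would
    // translate to and from a server's SQL dialect.
    class InMemoryDriver : public BackendDriver {
    public:
        void save(const UnifiedRecord& record) override { store_[record.key] = record; }
        UnifiedRecord load(const std::string& key) override { return store_[key]; }
    private:
        std::map<std::string, UnifiedRecord> store_;
    };

    // The abstractor presents the same unified view whatever the back end.
    class Abstractor {
    public:
        explicit Abstractor(std::unique_ptr<BackendDriver> driver)
            : driver_(std::move(driver)) {}
        void write(const UnifiedRecord& record) { driver_->save(record); }
        UnifiedRecord read(const std::string& key) { return driver_->load(key); }
    private:
        std::unique_ptr<BackendDriver> driver_;
    };

    int main() {
        Abstractor abstractor(std::make_unique<InMemoryDriver>());
        abstractor.write({"42", {{"name", "Ada"}}});
        std::cout << abstractor.read("42").fields["name"] << '\n';  // prints "Ada"
    }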
Sounds like ODBC, but it's not. While ODBC allows multiple database connections, one can only use very basic SQL queries, as ODBC is only a connector, not an abstractor. ODBC does not rewrite and optimize queries; it only provides a unified way of connecting to databases via its DSN syntax.
The LAB abstractor can use an ODBC connection when no native client driver exists for a given database back end. In that case, ODBC acts only as a client driver and the abstractor does its usual job. We believe this is the only way to have one toolkit usable everywhere, allowing database migration without changing a single line in applications, apart from the database section of the configuration file (which can be edited with a simple text editor in a few seconds).
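As an illustration, the database section of such a configuration file might look like the following. The file format and key names below are assumptions on our part, not actual LAB Project syntax; the point is only that switching back ends amounts to editing a few lines:

    [database]
    driver = postgresql      ; or "odbc" when no native driver exists
    host   = db.example.com
    port   = 5432
    name   = enterprise_data
    ; dsn  = MyLegacyDSN     ; only used when driver = odbc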
We call this an abstract data schema. Compared to the classic relational model, this abstract model no longer produces relations and constraints, but data associations acting as interconnected objects. This allows for discrete calculated fields, real-time inter-database conversions and direct integration of data, replacing lengthy batch processes.
The classic batch is replaced by a real-time process in which the LAB Project kernel class is in charge of finding what is for whom, rewriting queries accordingly, executing them and presenting the results to the user as if everything came from one physical database. However, this feature is not present in the Free Download Edition, as it is only useful when dealing with enterprise data spread among various back-end servers.
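As a rough sketch of that real-time dispatch, here is what the split-execute-merge loop could look like in C++, with the planner and executor stubbed out. All names (planAndRewrite, runFederatedQuery, the server names) are invented for illustration and are not the actual kernel interface.

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    using Row = std::map<std::string, std::string>;

    // One slice of a unified query, rewritten in the dialect of the
    // back end that owns the corresponding data.
    struct SubQuery {
        std::string backend;
        std::string text;
    };

    // Stub planner: a real kernel would inspect the abstract schema
    // to find out "what is for whom" and rewrite accordingly.
    std::vector<SubQuery> planAndRewrite(const std::string& unifiedQuery) {
        (void)unifiedQuery;  // a real planner would parse this
        return {{"hr_server", "SELECT name FROM employees"},
                {"sales_server", "SELECT amount FROM orders"}};
    }

    // Stub executor: a real one would send the text to the server.
    std::vector<Row> execute(const SubQuery& sq) {
        return {{{"source", sq.backend}}};
    }

    // Real-time replacement for the classic consolidation batch:
    // split the query, run each part on its own server, merge results.
    std::vector<Row> runFederatedQuery(const std::string& unifiedQuery) {
        std::vector<Row> merged;
        for (const SubQuery& sq : planAndRewrite(unifiedQuery)) {
            std::vector<Row> partial = execute(sq);
            merged.insert(merged.end(), partial.begin(), partial.end());
        }
        return merged;  // looks as if it came from one physical database
    }

    int main() {
        for (const Row& row : runFederatedQuery("SELECT ..."))
            std::cout << row.at("source") << '\n';
    }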
Abstract composite databases, while powerful, require in-depth knowledge of the participating database servers as well as of the schemas used by all participating databases. This is the main reason we restrict activation of this service to specially trained engineers and partners: to prevent a global enterprise disaster. Such virtualization brings headaches, as most enterprise information systems have grown over the years from nuts and bolts and need to be understood before the whole thing is interfaced.
Data is no longer tied to one database server. Servers are used only as data repositories, not as data processors. We are aware that such power might be used by only a few companies, those running a whole collection of incompatible pieces of software and needing lengthy overnight processing for data consolidation. But who can do more can do less...
Such a mechanism, while complex in design, provides a degree of flexibility unreachable by classic callback mechanisms. LAB Project tools are 100% based on signals and slots, be they abstract data managers or full-fledged widgets.
Seen another way, callbacks have several drawbacks: they are not type safe, and nobody can guarantee that a callback will always be fed the correct data. Furthermore, callbacks imply strong coupling between caller and receiver, and they do not generalize. The signal and slot mechanism is quite different. As signals and slots are predefined, with a strongly typed interface, only slots matching the interface can respond to a signal. This is a very loosely coupled process, where the signal emitter does not care who or what will accept the signal and act upon it. This is the basic tool used for database virtualization.
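The actual LAB Project classes are not shown in this document, but a minimal sketch in standard C++ shows how a strongly typed signal can be built: only slots whose signatures match can connect, a slot with the wrong signature fails to compile instead of being fed wrong data at run time, and the emitter never knows who is listening.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // A minimal, strongly typed signal: only slots matching the
    // parameter list <Args...> can be connected, so the compiler
    // enforces that every slot is fed the correct data.
    template <typename... Args>
    class Signal {
    public:
        void connect(std::function<void(Args...)> slot) {
            slots_.push_back(std::move(slot));
        }
        // The emitter does not know or care who is listening.
        void emit(Args... args) const {
            for (const auto& slot : slots_) slot(args...);
        }
    private:
        std::vector<std::function<void(Args...)>> slots_;
    };

    int main() {
        Signal<std::string> dataChanged;

        // Any number of receivers can join the conversation at any time.
        dataChanged.connect([](const std::string& field) {
            std::cout << "widget refreshes field " << field << '\n';
        });
        dataChanged.connect([](const std::string& field) {
            std::cout << "abstract data manager saves " << field << '\n';
        });

        dataChanged.emit("customer_name");  // both slots respond
    }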
Using this feature, LAB Project components can hold very smart conversations and accept new components into the conversation at any time. This allows, for example, a user form to interact with the underlying data abstraction without writing any more code than it takes to connect the signals in the designer window.
Over time, LAB Project will provide application designers with more and more connectible widgets.
That's the first reason why we developed a network-wide and data-wide security scheme based on sophisticated Access Lists. One can access information if and only if certain rights are granted in a certain context. A context can be anything from Ethernet or IP addresses to time of day, by way of terminal type... Provided SSL and IP tunneling are available on the network, they are used for all network conversations.
Aside from this physical security, the Access List Manager makes data invisible unless the context allows you to access it.
LAB Project grants access to information based on Network Address, IP Address, User Group, User Login, Day of Week, Time of Day, Application Name and Data Field. This makes a matrix of eight times seventeen cases applied to each and every bit of information. The default mode is more relaxed, allowing people to read data but preventing changes. You'll have to define your security scheme from this matrix (this could be the hardest part).
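To illustrate, here is a hedged sketch in C++ of what a check against those eight criteria could look like. The structure, field names and the sample rule are all hypothetical, invented for illustration; the actual Access List Manager and its matrix are not documented here.

    #include <iostream>
    #include <string>

    // The eight context criteria listed above, gathered for one request.
    // Field names are ours, for illustration only.
    struct AccessContext {
        std::string networkAddress;
        std::string ipAddress;
        std::string userGroup;
        std::string userLogin;
        int         dayOfWeek;    // 0 = Sunday ... 6 = Saturday
        int         minuteOfDay;  // 0 ... 1439
        std::string applicationName;
        std::string dataField;
    };

    enum class Permission { None, Read, ReadWrite };

    // A real scheme would look the context up in the access-list matrix;
    // this hard-codes one sample rule plus the relaxed read-only default.
    Permission checkAccess(const AccessContext& ctx) {
        // Sample rule: accounting may change salary fields,
        // but only on weekdays during office hours.
        bool officeHours = ctx.minuteOfDay >= 8 * 60 && ctx.minuteOfDay < 18 * 60;
        bool weekday     = ctx.dayOfWeek >= 1 && ctx.dayOfWeek <= 5;
        if (ctx.userGroup == "accounting" && ctx.dataField == "salary" &&
            weekday && officeHours)
            return Permission::ReadWrite;
        return Permission::Read;  // default mode: read allowed, changes prevented
    }

    int main() {
        AccessContext ctx{"10.0.0.0/24", "10.0.0.7", "accounting", "alice",
                          3, 10 * 60, "payroll_app", "salary"};
        std::cout << (checkAccess(ctx) == Permission::ReadWrite ? "read-write"
                                                                : "read-only")
                  << '\n';
    }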
Even if this scheme sounds paranoid, it guarantees that submitted data are valid and that the user who submitted them is authorized for such an action. Purists could say that such paranoia is a processor hog. In a sense, that's true, but consider that a few microseconds of processing are a cheap price against the thousands or millions lost when data is corrupted.