This document provides a specification of the ways in which the system interacts with users. See also the system's master QA document.
For an incomplete sketch of part of the API, look at the javadoc documentation.
The data structure definition is a single file which describes, in a form similar to a series of Java class declarations, the tables and fields which the program definitely expects to find in the database. If a field foo is declared in this file, it may be used in a type-safe way by the Java code, via automatically generated getFoo/setFoo method pairs; furthermore, the programmer has the opportunity to override those methods or add further operations in order to express the `business logic' of the `persistent classes' in a convenient and familiar way.
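To make the getFoo/setFoo idea concrete, here is a purely illustrative sketch of the kind of accessor pair the processor might generate for a declaration String foo;. The class names and the getRaw/setRaw helpers are hypothetical, introduced only for this example; they are not Melati's actual generated code.

    // Hypothetical generated base class for a table containing `String foo;'.
    public abstract class ThingBase {
      // Stand-ins for the generic, non-type-safe field access described below.
      protected abstract Object getRaw(String fieldName);
      protected abstract void setRaw(String fieldName, Object value);

      // The type-safe pair generated from the data structure definition.
      public String getFoo() {
        return (String)getRaw("foo");
      }

      public void setFoo(String foo) {
        setRaw("foo", foo);
      }
    }

    // The programmer overrides the generated methods in a hand-written
    // subclass to express `business logic'.
    public abstract class Thing extends ThingBase {
      public void setFoo(String foo) {
        // for instance, normalise the value before it is stored
        super.setFoo(foo == null ? null : foo.trim());
      }
    }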
However, the data structure definition is not exclusive: other tables and fields may be present in the underlying RDBMS database, and they will be available for generic processing in dynamically generated reports and forms. These undefined fields can also be referred to by their literal names, albeit in a non-type-safe way, by the programmer if she does not wish to go to the trouble of putting them in the data structure definition.
The data structure definition is processed into a set of machine-generated Java files, including a Java schema representation whose job is to initialise (and subsequently check the consistency of) the running database when the Melati application is started up.
The following snippet shows part of the data structure definition for an invoicing system. It is followed by a key explaining what the various constructs mean.
    table Invoice {
      (primary) int id;
      Date taxDate;
      (indexed) Party issuer;                   // a reference
      (unique) String number (maxlength = 10);
      (indexed) Party receiver;
      InvoiceLine.invoice Subset lines;         // proposed for an owned list
      Textarea notes (width = 50, height = 5);
    }

    table InvoiceLine {
      (primary) int id;
      Product product (combo);
      (indexed) Invoice.id invoice;
    }

    (cachelimit = 1000)
    table Party {
      (primary) int id;
      (unique) String name;
    }
table | Each table declaration corresponds to one table in the underlying database and one Java class. The system can autogenerate both a base definition for the class, including transparent marshalling, and, optionally, the database table (using SQL CREATE TABLE and CREATE INDEX commands). |
field definitions | A table's fields are declared in the familiar type name; format. For instance, the taxDate declaration in Invoice will give rise to a field called taxDate in the Invoice database table and a pair of methods Date getTaxDate(); void setTaxDate(Date); in the Invoice class. `Attributes' specifying the indices required for each field and default display preferences are given in parentheses. |
(indexed) | If a table is specified as being indexed by a particular field, the index will be generated automatically when the data structure definition is processed. FIXME: Possibly some more sophisticated mechanism for passing SQL index-type parameters will be needed? |
(unique) | It's also possible to specify that every record in the table must have a different value for a particular field. unique implies indexed. |
(primary) | One field in each table must be designated as a primary key. The system uses this as an OID (object identifier) to help it manage the cache. The primary field need not necessarily be called id. By implication, it is indexed and unique, and the system takes care of setting and reading its value: the programmer will hardly ever have to use it explicitly. |
references | References (links) between objects---in RDB jargon, `one-to-one' relationships between records---are specified just like string or numeric fields, in the form target-table name;. The target field of a reference is always the target table's primary key. For instance, the issuer declaration in Invoice will give rise to a field issuer in the Invoice table which contains the primary id number of a Party record, and to a pair of methods Party getIssuer(); void setIssuer(Party); in the Invoice class which deal directly in objects representing the linked Party: the necessary dereferencing happens transparently. |
owned lists | FIXME: I think this is history, but there is a way of specifying how you want to maintain data integrity when a row is deleted. JimW --- Owned lists of objects, similar to Java Vectors and expressed in RDBs as `one-to-many' relationships between records, are specified in the form target-table.link-field Subset name;. For instance, the lines declaration in Invoice will cause the system to check that InvoiceLine has an indexed field invoice, and give rise to a method Subset getLines(); in the Invoice class, which returns an object behaving somewhat like a Vector (FIXME say more ...). |
(cachelimit = ...) | A limit can be placed on the number of records from each table which will be held in the cache. If omitted, it defaults to some suitably small number. |
Sometimes, it would be convenient to be able to embed sub-records inside a table row, rather than linking into a separate table. For instance, we might want to express a quantity of money in an arbitrary currency, by including a reference to the currency in question along with the numeric amount; logically, the two fields form a single unit of data, and could well be grouped into an object. At this stage, though, it's not clear that features for dealing cleanly with this situation would be sufficiently beneficial to offset the work required to implement them; furthermore, they would inevitably obfuscate the API to some extent, making it less Java-like, because in Java's memory model all compound structures are stored as independent `boxed' entities.
For some purposes, it might be nice to support inheritance between tables (as Postgres does). Getting the corresponding Java classes arranged in a hierarchy which mirrored that defined on the tables could probably be managed, albeit slightly untidily given the lack of multiple inheritance. This feature is not considered to be a priority for the moment.
Programmers write the data structure definition using their favourite text editor, just as they write Java code. They must then run a processor over the file in order to generate the Java base class definitions for the persistent classes and the database validation/initialisation code. The processor is written in Java so that any programmer wishing to use Melati will be able to compile and run it straightforwardly. Programmers who use make-like utilities can arrange for the processor to be run automatically when the data structure definition file is changed; however, it is not anticipated that this will happen very often, so manual intervention will not be a major chore.
Deciding how permissions are expressed in the API means making tradeoffs between flexibility, administrative convenience and implementational efficiency. At the moment, JAL supports arbitrary access control lists for records, templates and controllers, expressed in terms of user groups; exceptions to the default policy (world-readable and world-writeable) are stored in the userpermissions table, and queried by means of a three-table join involving userresourcetypes. Although this API is very flexible, it undoubtedly adds some overhead which---on the general principle that scalability can only be achieved by constant discipline---we might seek to avoid, even though it's clearly not a problem right now. (Since the size of the ACL table may well scale linearly with that of the overall data set, it is probably not sensible to attempt to cache it.) Furthermore, in order to implement any given access policy, it's necessary for an administrator or administrative process to set up an appropriate ACL.
For Melati, it is proposed that we move to the following model. Every persistent object (record) has a pair of access-assertion methods

    void assertReadable(AccessToken token) throws AccessException;
    void assertWriteable(AccessToken token) throws AccessException;

which throw some informative exception if token is not sufficient to permit reading/writing the record's fields. If capability fields

    Capability readCapability;
    Capability writeCapability;

are defined for a table and are non-null in the record under consideration, the default (base-class) access-assertion methods check token against them explicitly. It's possible to define arbitrary permissions for an object, but only in terms of a single capability which is stored in the same table row as the record's actual data fields. An example of a capability might be `writeable by a trusted participant of the FooWeb project'. This scheme is similar to the supplementary group mechanism of the Unix filesystem, and (in fact) to Turbine's user/role/permission system.

More specialised policies are expressed by overriding the assertion methods; for instance, the following fragment lets an invoice be read by its issuer and its receiver, falling back to the default check for everyone else:
    public class Invoice extends InvoiceBase {
      public void assertReadable(AccessToken token) throws AccessException {
        if (token.getUser() != getIssuer() && token.getUser() != getReceiver())
          super.assertReadable(token);
      }
    }
We are not interested in supporting generic field-specific access control, but special rules can be supported programmatically by overriding a class's setter/getter methods. For example, the following fragment would prevent changes to an invoice's taxDate after the invoice had been `finalised' (as determined by a method isFinalised which is here left undefined); the date could still be force-changed using a separate method, but a special capability would be required.
    public class Invoice extends InvoiceBase {
      public void setTaxDate(Date date) {
        if (isFinalised())
          throw new BlahException("rhubarb");
        else
          super.setTaxDate(date);
      }

      public void setTaxDate_force(Date date) {
        if (!Implicit.accessToken().hasCapability(forceInvoiceDetails))
          throw new BlahException("rhubarb");
        else
          super.setTaxDate(date);
      }
    }
Another issue which has to be resolved is the question of when the low-level access control checks are performed. Two different models were considered: a partly static one, in which the level of access already established for an object is reflected in the Java type of the handle held on it, so that some violations can be caught at compile time; and an explicitly dynamic one, in which every access is checked at run time against the current access token.
The partly static method has the advantage that it uses the type system, to some extent, to help the programmer identify early on what level of access she needs to an object, and documents semi-automatically whether variables and method parameters hold references through which an object can possibly have its state changed. It is, in fact, analogous to the use of the const keyword in C/C++; and this should set off alarm bells, because const is controversial and has well-known downsides.
Perhaps most seriously, it can confuse novice programmers, because once you start using it, you have to use it consistently: you cannot cleanly call a non-const-aware library routine using a const-annotated handle.
Furthermore, handles with guaranteed permission levels do not fit well with Melati's access model, in which objects (rows) may require different access capabilities or implement programmatic access policies, and yet we want links to other objects to be resolved transparently. A programmer may have `guaranteed' read access to obj, but no promise can be made that obj.getFoo() is a readable handle to the linked foo until permissions have been checked. So the compile-time guarantee that no access exceptions will be thrown is vitiated even in simple cases.
For these reasons, and for simplicity (providing unbreakably read-only handles is quite complicated), we go with explicitly dynamic access checks. Note that checks still happen at a low level: posting guards on all the entry points to a Melati-based application is not strictly necessary for security.
The other main design decision for the access control API is how the identity of the user on whose behalf operations are being performed will be carried around. One option considered was to pass the user explicitly into every accessor, for instance:

    String issuer = line.getInvoice(user).getIssuer(user).getName(user);

Of course, provision could be made for the user info to be omitted if the programmer were willing to make the assumption that a field was `world-readable'. However, this mechanism still goes against the aim of near-transparent persistency. The alternative considered was to associate the user implicitly with the execution thread.
The thread-implicit technique seems to be the most convenient and transparent option for the programmer, given that the idea of a `current user' carrying implications for the capabilities of the running code is familiar from the process-ownership scheme implemented by all modern operating systems.
Implementation note. The ideal way of implementing a thread-implicit `effective user ID' would be to subclass java.lang.Thread so as to be able to associate the ID with each thread directly as a field; but this option isn't available without making a minor change to org.webmacro.broker.ResourceManager. Instead, it is proposed that the thread-user association be maintained via a hash table or (possibly ...) by manipulating the thread's name.
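As a rough sketch of the hash-table option---the Hashtable and the setAccessToken entry point are assumptions for illustration, not a settled design---the association might be maintained like this:

    import java.util.Hashtable;

    // Sketch: a thread-implicit access token kept in a (synchronised) hash table.
    // setAccessToken() would be called by the framework before any of the
    // programmer's code runs; accessToken() is the method used in the examples
    // elsewhere in this document.
    public class Implicit {
      private static final Hashtable tokenForThread = new Hashtable();

      public static void setAccessToken(AccessToken token) {
        tokenForThread.put(Thread.currentThread(), token);
      }

      public static AccessToken accessToken() {
        return (AccessToken)tokenForThread.get(Thread.currentThread());
      }
    }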
For some purposes, it will be necessary to allow users to perform, in a controlled manner, operations for which they would not usually have the necessary access permissions. For example, the production of relatively insensitive summary reports may involve scanning a number of individually secret records.
The example below sketches how anyone with read access to an invoice could be allowed to compute its total value even if they were not allowed to read its individual lines.
    public class Invoice extends InvoiceBase {
      ...
      public long totalValue() {
        // Fail if we don't have read access to `the invoice'.
        assertReadable(Implicit.accessToken());

        // If we do, force access to its constituent lines for this one operation.
        long value = 0L;
        Implicit.pushCapability(InvoiceLine.forceRead);
        try {
          for (Enumeration lines = getLines().elements(); lines.hasMoreElements();)
            value += ((InvoiceLine)lines.nextElement()).getAmount();
        }
        finally {
          // To avoid our having to remember to do this, the enhanced-capability
          // operation could be wrapped up in a Runnable.
          Implicit.popCapability();
        }
        return value;
      }
      ...
    }

    public class InvoiceLine extends InvoiceLineBase {
      ...
      // A capability used by Invoice.totalValue().
      // It's kept package-private in order to reduce the chance of leakage
      // leading to a more general access breach than intended.
      static final SettableCapability forceRead;
      ...
      public void assertReadable(AccessToken token) {
        if (!token.hasCapability(forceRead))
          super.assertReadable(token);
      }
      ...
    }
Under thread-implicit, dynamic, group-capability access control, a persistent object behaves very like a file: you can legally attempt any defined operation on it, but if the user in whose name you are running is not a member of a group with an appropriate capability, an exception will be thrown following an (almost) indefeasible low-level check. Bypassing record permissions in order to support a particular operation is like setting an effective user ID for a particular utility program.
JAL's security model currently relies on restricting access to Webmacro handlers and templates. There is no reason why Melati's capabilities model should not be used to support access control tests buried in the HandlerProvider and TemplateProvider supplied to Webmacro. But it's probably better just to have handlers examine the user's capabilities for themselves. The following fragment shows how a handler for a generic record-editing service might do this:
    // Fetch the record specified in the form
    String tableName = (String)context.getForm("table");
    int recordNum = Integer.parseInt((String)context.getForm("id"));
    Record record = database.table(tableName).record(recordNum);

    try {
      // Fail if we can't read it
      record.assertReadable(Implicit.accessToken());
      // Fine, return the editing template
      ...
    }
    catch (AccessException e) {
      // Take appropriate action, e.g. returning a login template
      ...
    }
NB in Melati, the worst that can happen if the checks are left too late is that the user gets an error message generated by the low-level persistent store after filling in and submitting a form.
One of the requirements for Melati is that it should support transactions (and that its data cache should remain consistent even when transactions are pending or get cancelled). Integrating transactions with the API under which database records appear as transparently persistent objects poses the same problems as did the notion of the `current user': there has to be some way for the persistent store to know which transaction a data access (NB read as well as write!) is meant to belong to; but to require the programmer to pass a Connection handle into every call would spoil the illusion and degrade the simplicity of the interface.
It is, however, anticipated that in nearly all cases, the pattern in which transactions are used will be very simple: for each incoming HTTP request, begin a new transaction; if an exception is thrown during processing, roll it back, but on successful completion, commit it. So it makes sense to adopt a model in which the `current transaction' is associated with the execution thread, just as it is proposed that the `current user' should be. The idea should be familiar from single-threaded SQL monitors like psql. If the transaction is set up---along with the user ID---before any of the programmer's code runs, and a trap is put in place to cancel it if an exception condition occurs, then the right thing will generally happen automatically without the programmer having to think about it.
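A minimal sketch of that per-request pattern follows; openSession, commit, rollback and the Implicit.setSession hook are all hypothetical names used here for illustration, by analogy with the access-token mechanism above.

    // Hypothetical framework-level wrapper run once per incoming HTTP request.
    void handleRequest(Context context) throws Exception {
      Session session = database.openSession();  // begin a new transaction
      Implicit.setSession(session);               // associate it with this thread
      try {
        handler.handle(context);                  // the programmer's code
        session.commit();                         // successful completion
      }
      catch (Exception e) {
        session.rollback();                       // the trap: cancel the transaction
        throw e;
      }
      finally {
        Implicit.setSession(null);                // dissociate from this thread
      }
    }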
Explicit checkpointing (committing) is also available, and if the programmer needs to perform some subtask in the context of a different transaction, she can do so with the following idiom:
Session otherSession = ...; ... Implicit.inSession(otherSession, new Runnable { public void run() { // do the subtask } }); ...
It goes without saying that behind the implicit transaction mechanism, Melati will support `connection pooling'. Implementation note: perhaps Sun's new pooling utility will be suitable.
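For illustration only, and assuming a javax.sql.DataSource-style pool (whether that is the utility meant above is not confirmed), usage behind the scenes would look roughly like:

    // The pool itself (a javax.sql.DataSource implementation) is assumed to be
    // configured elsewhere.
    javax.sql.DataSource pool = ...;
    java.sql.Connection connection = pool.getConnection();
    try {
      // hand the connection to the persistent store for the duration of
      // the current transaction
    }
    finally {
      // with a pooling DataSource this returns the connection to the pool
      // rather than really closing it
      connection.close();
    }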
A record identified by its primary key can be called up from the persistent store (cache or DBMS) by invoking a method on its table:
Invoice inv = database.invoiceTable().invoiceRecord(234);
Implementation note. The underlying SELECT used to retrieve identified or linked records by primary key is a cached PreparedStatement.
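As a hedged illustration in plain JDBC (the table and column names are taken from the running example, not from any actual Melati code), the cached statement amounts to something like:

    // Prepared once per table and reused for every retrieval by primary key.
    PreparedStatement invoiceById =
        connection.prepareStatement("SELECT * FROM invoice WHERE id = ?");
    ...
    invoiceById.setInt(1, 234);
    ResultSet results = invoiceById.executeQuery();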
It's possible to ask for a SELECTion of objects from a table via its selection method. We may eventually want to support some minimally complicated way of constructing these queries without embedding literal SQL in the code; for instance:
Enumeration them = invoiceTable.selection( Filter.like(Invoice.NUMBER, "123%"));
A sufficiently powerful `meta-language' of that kind should be able to support queries which automatically include the joins necessary to resolve references between objects. But there may well be little need for that feature.
The programmer can also run arbitrary SELECT queries on the database; the result will not be a stream of objects (so that e.g. any overriding of getter methods will be ignored) and will not be cached, but it ought to be possible to present it in a more friendly form than a ResultSet---perhaps an Enumeration of Field objects which can trivially be turned into appropriate markup in the template.
For the moment it is not proposed that we support partial retrieval of records, i.e. specifying which fields should be uploaded from the database now (if they aren't cached) and leaving others to be loaded on demand. This might save a little memory and IPC, and possibly disk accesses on the DBMS side if the records were very big, but it's probably not worth it.
Updates to records are supported transparently via the corresponding objects' setter methods. By default, the invocation of any single setter method will result in an immediate UPDATE command being issued to the DBMS (although the change will not, of course, be visible outside the current transaction). Since that behaviour is inefficient if one wants to change a number of fields at once, we may want to provide a way of batching updates into a single DBMS command.
At the simplest, this is a pair of methods: record.cacheModifications(), which causes modifications to an object to stay in the data cache only, and record.writeModifications(), which causes cached and future changes to be written down immediately as usual. The problem is that you have to remember to turn write-down back on (and also the cache is slightly out of sync with the results you will get from SELECTs).
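Used directly, the pair would look something like this (a sketch only; the wrapper described next is preferred):

    invoice.cacheModifications();   // subsequent setter calls touch the cache only
    invoice.setTaxDate(taxDate);
    invoice.setNotes(notes);
    invoice.writeModifications();   // write the batched changes down in one go and
                                    // resume immediate write-down as usual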
So we wrap those in a record.apply method, which you use as follows:
    invoice.apply(new InvoiceUpdater() {
        public void update(Invoice invoice) {
          invoice.setTaxDate(taxDate);
          invoice.setNotes(notes);
          ...
        }
      });
But the most common situation in which a multi-field update is required is when reading values in from a form, and that is handled automatically (and atomically); the apply idiom will almost always be unnecessary.
FIXME: must support transactions and caching of whole subsets. Transactions are handled by copying an object's underlying array of fields into a session-private cache when it is modified. An easy, though possibly expensive, solution for subsets would be to copy the whole list of members into the session cache.
The fields attached to persistent objects are associated with rich typing and display preference information, which is used for creating displays and input boxes for their values in whatever markup language the template is written in, and for generating javascript validation routines for those inputs.
Clearly there is a necessary distinction between abstract type/style information and the markup-specific way in which it is used (the latter being encapsulated in an object representing the markup language). Another possible cut is between types strictly so defined and display preferences, but it's not clear what would be gained by separating them, so it is proposed that both should be encoded in a single hierarchy (now in org.melati.poem).
Unlike in JAL, it is proposed that field values should not, in general, be stored and passed around with full type information attached, but instead as plain Java Strings, ints and so on. If the programmer needs to know more about the values than is evident from their Java types---which she mostly will not---she has to call a different method:
    String notes = invoice.getNotes();
    TextType notesType = invoice.table().getNotesType();
The advantages claimed for this approach are a small gain in efficiency, since the values returned by getter methods can be slightly smaller and quicker to construct, and an improvement in transparency for the programmer: she can deal directly in familiar Java types.
However, we will probably also want to provide convenience methods for packaging a value and a type/style together in a form in which they can be used to generate markup concisely in templates.
One of the aims for Melati is to tidy up JAL's facility for generating HTML for form elements corresponding to record fields, with a view to making it easier to understand, and capable of extension to work with WML and XML, and, perhaps, non-SGML-derived languages such as plain text (for emails) or something suitable for input to a PDF generator.
FIXME: this is in fact probably NOT how we will do it; we've realised that calling up mini subtemplates for controls is a much better idea! Embedding HTML (or whatever) in the Java is just wrong, even if it's wrapped in some library.
The main issue to be resolved in the design of the new system is: how much commonality of structure do we assume between the target languages?
It is proposed that we should go with the first option, for the following reasons:
JAL's mechanism for inserting Javascript fragments which perform client-side validation of form fields works by inserting a validation script fragment for each form field, together with a trigger in the form's submit button.
This mechanism can be adopted unchanged, along with all the existing Javascript code, by Melati if it is made part of the HTML MarkupLanguage. It is proposed that the script fragment simply be included along with the markup for each <INPUT> so that it does not have to be mentioned explicitly in the template; the inclusion of the trigger in the submit button should be made transparent in a similar way.
MarkupLanguages will provide template authors with easy-to-use facilities for inserting markup which renders field values (which need, for instance, to be escaped in a manner appropriate to the target language) and input controls.
The following example shows how part of a template for displaying an invoice might look.
    #set $ml = $jal2.HTMLMarkupLanguage
    ...
    <P>Invoice number: $ml.display($invoice.NumberField)</P>
    <P>Tax date: $ml.display($invoice.TaxDateField)</P>
    <P>Colour: $ml.displayColourSample($invoice.ColourField)</P>

    <TABLE>
    #foreach $line in $invoice.Lines {
      <TR>
        <TD>$ml.display($line.Product.CodeField)</TD>
        <TD>$ml.display($line.Product.DescriptionField)</TD>
        <TD>$ml.display($line.AmountField)</TD>
      </TR>
    }
    </TABLE>
At the top of the template is a directive for obtaining an HTML renderer $ml which is then used explicitly to display each field. FIXME: It might be possible to make the markup language a (thread-) global setting like the current user and current transaction---need to check what is possible in webmacro's syntax. The labels NumberField, TaxDateField, ... are used in place of Number, TaxDate, ... to retrieve both value and type/style information simultaneously (see above).
Note the use of the displayColourSample method to force a field to be displayed in a particular form: it's entirely open to the template writer to use language-specific special rendering techniques, because, of course, she knows what language she is writing for.
Pulling the items out of the invoice is trivial: the template writer can simply invoke its getLines method to obtain an enumerable container with the appropriate objects in it.
Forms for named fields are handled in a similar way (FIXME this is impressionistic at the moment); recall that validation snippets are included along with the markup for each input:
    #set $ml = $jal2.HTMLMarkupLanguage
    $ml.BodyInclusions   <!-- get the javascript header in -->
    ...
    <P>Invoice number: $ml.input($invoice.NumberField)</P>
    <P>Tax date: $ml.input($invoice.TaxDateField)</P>

    <INPUT TYPE=submit value=Update name=Update $ml.SubmitButtonAttributes>
Templates for applications like the admin system are written in a similar style to their JAL equivalents:
    #set $ml = $jal2.HTMLMarkupLanguage
    $ml.BodyInclusions   <!-- get the javascript header in -->
    ...
    <TABLE>
    #foreach $field in $object {
      <TR>
        <TD>$ml.label($field)</TD>
        <TD>$ml.input($field)</TD>
      </TR>
    }
    </TABLE>

    <INPUT TYPE=submit value=Update name=Update $ml.SubmitButtonAttributes>
There is a really hair-raising list of things that have to be done before a JAL application can be delivered. The following is a summary taken from the JAL Installation Guide:
Melati can carry out the very last step automatically by running CREATE TABLE and CREATE INDEX commands determined from the data structure definition---assuming that Postgres thinks the installer has database-creation rights (current JAL applications provide a psql-based script for this purpose). However, the other steps would be exceedingly difficult to automate in a way which would dovetail with an existing setup on a customer's machine: the only viable means of coexisting with their settings would be to use the API of a configuration tool like linuxconf (but even linuxconf doesn't know about e.g. mod-jserv). That leaves two possible solutions, which we could offer as alternatives:
Umm, no one is pretending it is straightforward, but I think it is within the capabilities of all webmacro users. NB ISPs such as ednet (and others that host servlets) provide an environment with Linux, apache, Postgres, Java, JSDK, leaving the user just to mess about with classpaths.
Most of the installation will have to be carried out from a command prompt on Linux; the best interface for NT/W2K will be decided when the port is made. However the creation of a Melati application database could be carried out by pressing a button on the web admin interface.
The generic admin system looks essentially identical to JAL's existing screens. New database fields and even tables can be added, and will be available for use in templates and generic report/data entry screens: the data structure definition is not exclusive.
Field display preferences set in the data structure definition---canonically, the default height of a TEXTAREA---can be adjusted freely by the administrator; the system will never `change them back', because the system only ever adds fields in the DSD which are missing from the running database. FIXME: TimJ points out that this could be confusing: ``I changed my DSD and regenerated, but my text area is still the same size''. It is confusing, and we need a warning message, but the only alternative is to remove preference information to a separate file, and that would detract from the appealing conciseness of the notation.
The administrator is not allowed to change the basic type of any field, e.g. from INT to FLOAT or from VARCHAR(10) to VARCHAR(11) (Postgres doesn't support this). She can delete a field (and add it again in a different form), or rename it, provided that it was not declared in the data structure definition; whenever such a change is made (which is assumed to be seldom), the data cache is cleared of records from the table in question, because otherwise the persistent store would have to cope with multiple versions of a table's shape.
Administrators access Melati's services over a web interface. Can we use secure transport for sensitive purposes?
Login and password management look essentially identical to JAL's existing screens.
Users access Melati's services over a web interface. Can we use secure transport for sensitive purposes?
The ways in which users can achieve the goals expected of them by navigating through the system are:
Complete, working examples to follow here eventually.
The external circumstances which are essential to the correct and reliable operation of the system are:
The obvious ways in which this specification might turn out to be poor are:
The obvious ways in which the implementation of this specification might fail are:
William Chesters <williamc@paneris.org>
Most recent CVS $Author$ @paneris.org
The current quality level of this document is: Alpha. There are decisions yet to be made, sections to be filled out and some additions to come (including more examples). Some of the content would fit better in the Requirements Specification.
The important points in the life of this document are listed below.
Date | Event |
---|---|
(not yet) | Certified at release quality level by ... |
The CVS log for this document is:
$Log$
Revision 1.1 2005/11/21 22:01:49 timp
Moved from site/doc
Revision 1.15 2003/03/04 22:01:47 jimw
Removed some broken links and a few misleading historical details.
Revision 1.14 2002/12/29 09:23:55 jimw
Removed doc from doc/examples
Revision 1.13 2000/10/26 05:53:46 timj
remove documentation of unique with
Revision 1.12 2000/07/27 18:39:45 timp
Make CVS links work
Revision 1.11 2000/02/29 09:53:02 williamc
Finish recovering from disaster; point out in the docs that you can 'add methods to table rows'
Revision 1.2 2000/02/04 18:28:34 williamc
Add QA stub; explain DSD-admin interactino a little better