Sunday, October 23, 2016

mJson 1.4.0 Released

It is with great pleasure that I hereby announce the official release of mJson 1.4.0. There have been a few improvements and bug fixes, which I list below and explain whenever explanations are needed. The next release will be 2.0, marking a move to Java 8 (a.k.a. the Scala killer).

I am also announcing a new sibling project: mjson-ext at https://github.com/bolerio/mjson-ext. The mjson-ext project will be sprinkled with modules implementing extras, including Java Beans serialization, integration with MongoDB, support for native JavaScript objects in embedded browsers and whatever else comes up. If you have an alternative implementation of the mJson interfaces that covers your specific use case, or some other interesting module, I'd be grateful if you chose to share it so that others can benefit. Thanks!

Release Notes 

So, here is what mJson 1.4.0 is about.
  • Side-effect free property accessor - this is a case of a bad initial design, corrected after some soul searching. The property accessor is the Json.at(name, defaultValue) method, which returns the value of a property in a JSON object, or the provided default value in case the property is absent. Prior to this version, when the object had no property with the given name, the accessor would not only return the default value, it would also modify the object with set(name, defaultValue). The idea behind that decision was that the default value would be the same in a given context, so whatever part of the code knows it should set it for other parts to use. And this works elegantly in many situations. But, as with all side effects, the charm of the magic eventually gave way to the dark forces of unexpected behaviors. In this particular case, JSON objects persisted in a database would be systematically polluted with all sorts of defaults that (1) make the objects bigger than necessary and (2) are actually invalid the next time around. And in general, there are cases where the default is valid in one context of use but not another. In sum, I can't believe I made this stupid decision. But I did. And now I'm taking it back. Apologies for the backward-subtly-incompatible improvement. (See the sketch right after this list for the new behavior.)
  • java.io.Serializable - The Json class is now java.io.Serializable so that applications relying on Java serialization don't hit a wall trying to use Json within a serializable class. This is useful for example when using mJson in a distributed environment that relies on serialization such as Apache Spark.
  • Parent references - better handling of enclosing/parent Json. In case you missed it, mJson keeps track of the parent of each JSON element. And there is an up() function to access it. That works well when there is always a single parent and the JSON structure is a mere tree. But when we have a more complex DAG structure, there can be multiple parents. This version handles this case better - the parent Json of an element will be an array in case there are multiple parents. Naturally, since an array member will have a Json array as its parent, that causes ambiguity with multiple parents: is my parent an array that is part of my structure, or do I have multiple parents? At the moment, mJson does not resolve said ambiguity for you; it's your own responsibility to manage that.
  • Merging - when you have two JSON objects X and Y and you want to copy Y ('s properties) into X, there are a lot of potential use cases pertaining to the copy process. It's a bit like cloning a complex structure where you have to decide if you want a deep or a shallow clone. It's of consequence for mutable structures (such as Json objects or arrays) - a deep copy guarantees that mutating the clone won't affect the source, but is of course more costly to do. In the case of merging, you also have to be mindful of what's already in X - do you want to overwrite at the top level only (shallow) or do you want to recursively merge with existing properties of X? If X and Y are arrays, the use cases are more limited, but one important one is when the arrays are sorted and we want the resulting array to maintain order. All this is covered in an improved Json.with method that accepts a set of options (to be expanded, of course, based on your feedback, dear user) driving which parts of the tree (ahem, DAG) structure are to be cloned or not, merged or sorted or not. Take a look at https://github.com/bolerio/mjson/wiki/Deep-Merging.
  • Json.Schema - processing JSON schemas may involve resolving URIs. If all is good and all URIs actually point to schemas accessible on the web, things work. But often those URIs are mere identifiers and the schemas are elsewhere, thus making it impossible for mJson to load a schema that makes use of other schemas that make use of other schemas that may or may not make use of the one before... OK, I'll stop. Hence, from now on you can roll your own URI resolution by providing a Json.Function - yes, the same stuff Java 8 has, but we are staying with Java 7 for this release, so we have a soon-to-disappear special interface just for that.
  • ...and of course there are some bug fixes, OSGi packaging was added, and a previously private method is now exposed, namely Json.factory().
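
To illustrate the first item, here is a minimal sketch of the new accessor behavior; the object, property names and values are made up for illustration:

import mjson.Json;

public class AtAccessorExample {
    public static void main(String[] args) {
        Json user = Json.object("name", "Maria");
        // Reading a missing property with a default no longer mutates the object.
        String role = user.at("role", "guest").asString();   // "guest"
        System.out.println(user.has("role"));                 // false - 'user' is unchanged
        // If you relied on the old side effect, set the default explicitly.
        if (!user.has("role"))
            user.set("role", "guest");
        System.out.println(user);  // now {"name":"Maria","role":"guest"} (property order may vary)
    }
}
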
I love this little project. I always get positive feedback from its small user community. I will therefore share some extra goodies soon in the mjson-ext project and write some other fun functions in mJson itself to help with navigation and finding stuff.

Cheers,
Boris

Wednesday, December 30, 2015

Hbase, Spark and HDFS - Setup and a Sample Application

Apache Spark is a framework where the hype is largely justified. It is both innovative as a model for computation and well done as a product. People may be tempted to compare it with another framework for distributed computing that has become popular recently, Apache Storm, e.g. with statements like "Spark is for batch processing while Storm is for streaming". But those are entirely different beasts. Storm is a dataflow framework, very similar to the HyperGraph DataFlow framework for example, and there are others like it; it's based on a model that has existed at least since the 1970s, even though its authors seem to be claiming credit for it. Spark, on the other hand, is a novel approach to dealing with large quantities of data and complex, arbitrary computations on it. Note the "arbitrary" - unlike Map/Reduce, Spark will let you do anything with the data. I hope to post more about what's fundamentally different between something like Storm and Spark because it is interesting, theoretically. But I highly recommend reading the original paper describing RDDs (Resilient Distributed Datasets), the abstraction at the foundation of Spark.

This post is just a quick HOWTO if you want to start programming against the popular tech stack trio made up of HDFS (the Hadoop Distributed File System), HBase and Apache Spark. I might follow up with the intricacies of cluster setup some time in the future. But the idea here is to give you just a list of minimal steps that will work (fingers crossed). Whenever I mention relative file paths, they are to be resolved within the main installation directory of the component I'm talking about: if I'm talking about Hadoop and I write 'etc/hadoop', I mean 'etc/hadoop' under your Hadoop installation, not '/etc/hadoop' in your root file system!

All code, including sample data file and Maven POM can be found at the following Github GIST: https://gist.github.com/bolerio/f70c374c3e96495f5a69

HBase Install

We start with HBase because in fact you don't even need Hadoop to develop locally with it. There is of course a standard package for HBase and the latest stable release (at the time of this writing) is 1.1.2. But that's not going to do it for us because we want Spark. There is an integration of Spark with HBase that is being included as an official module in HBase, but only as of the latest 2.0.0 version, which is still in an unstable SNAPSHOT state. If you are just starting with these technologies, you probably don't care about having a production-blessed version. So, to install HBase, you need to build it first. Here are the steps:
  1. Clone the whole project from GIT with:
         git clone https://github.com/apache/hbase.git
  2. Make sure you have the latest Apache Maven (3.0.5 won't work); get 3.3+ if you don't already have it.
  3. Go to the HBase project directory and build it with:
         mvn -DskipTests=true install
     That will put all HBase modules in your local Maven repo, which you'll need for a local Maven-based Spark project.
  4. Then create an hbase distribution with:
        mvn -DskipTests=true package assembly:single
    You will find the tarball under hbase-assembly/target/hbase-2.0.0-SNAPSHOT-bin.tar.gz.
  5. This is what you are going to unpack in your installation directory of choice, say /opt/hbase.
  6. Great. Now you have to configure your new HBase installation (the untarred build you just created). Edit the file /opt/hbase/conf/hbase-site.xml to specify where HBase should store its data. Use this template and replace the path inside:
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>file:///home/boris/temp/hbase</value>
      </property>
    </configuration>
    
  7. Make sure you can start HBase by running:
         bin/start-hbase.sh (or .cmd)
  8. Then start the HBase command line shell with:
       bin/hbase shell
    and type the command list to view the list of all available tables - there will be none of course.
Spark Development Setup

Spark itself is really the framework you use to do your data processing. It can run in a cluster environment, with a server to which you submit jobs from a client. But to start, you don't actually need any special downloads or deployments. Just including Maven dependencies in your project will bring the framework in, and you can call it from a main program and run it in a single process.

So, assuming you built HBase as explained above, let's set up a Maven project with HBase and Spark. Then we'll get some sample data to play with and go over a sample application that makes use of two different approaches in Spark: the plain API and the SQL-like Spark module, which essentially gives you a scalable, distributed SQL query engine. We won't cover that module here, in part because its HBase integration is sort of an undocumented work in progress. However, you can see an example in the sample code provided in the GIST!

No point repeating the code in its entirety here, but I will show the relevant parts. The Maven pom only needs to contain the hbase-spark dependency:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-spark</artifactId>
    <version>${hbase.version}</version>
</dependency>

That will pull in everything you need to load the data into a local HBase instance and process it with Spark. That Spark SQL module I mentioned is a separate dependency, only necessary for the SQL portion of the example:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.5.1</version>
</dependency>

There are two other auxiliary dependencies of the sample project: one for CSV parsing and the mJson library, which you can see in the pom.xml file from the Github GIST.

Playing with Some Data

We will do some processing now with some Open Government data, specifically from Miami-Dade County's list of recently arrested individuals. You can get it from here: https://opendata.miamidade.gov/Corrections/Jail-Bookings-May-29-2015-to-current/7nhc-4yqn - export it in Excel CSV format as a file named jaildata.csv. I've already made the export and placed it in the GIST, even though that's a bit of a large file. The data set is simple: it contains arrests for a big part of the year 2015. Each record/line has the date the arrest was made, where it was made, the name of the offender, their date of birth and a list of up to 3 charges. We could, for example, find out how popular the "battery" charge is for offenders under 30 compared to offenders over 30.

The first step is to import the data from the CSV file to HBase (note that Spark could work very well directly from a CSV file). This is done in the ImportData.java program, where in a short main program we create a connection to HBase, create a brand new table called jaildata, then loop through all the rows in the CSV file to import the non-empty ones. I've annotated the source code directly. The connection assumes a local HBase server running on a default port and that the table doesn't exist yet. Note that data is inserted as a batch of put operations, one per column value. Each put operation specifies the column family, column qualifier and the value, while the version is automatically assigned by the system. Perhaps the most important part of that upload is how the row keys are constructed. HBase gives you complete freedom to create a unique key for each row inserted and it's sort of an art form to pick a good one. In our case, we chose the offender's name combined with the date of the arrest, the assumption being of course that the same person cannot be arrested twice in the same day (not a very solid assumption, of course).
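
The full ImportData.java is in the GIST; what follows is only a minimal sketch of the write path just described (connection to a local HBase, a batched put, and the name-plus-date row key), with the CSV parsing and table creation elided and the sample values made up:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ImportSketch {
    public static void main(String[] args) throws Exception {
        // Default configuration: a local HBase server on the default port.
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("jaildata"))) {
            List<Put> batch = new ArrayList<>();
            // In the real program these values come from each non-empty CSV row.
            String name = "DOE, JOHN", bookDate = "6/1/2015", charge1 = "BATTERY";
            // Row key: offender name combined with the booking date, as discussed above.
            Put put = new Put(Bytes.toBytes(name + "_" + bookDate));
            put.addColumn(Bytes.toBytes("arrest"), Bytes.toBytes("name"), Bytes.toBytes(name));
            put.addColumn(Bytes.toBytes("arrest"), Bytes.toBytes("bookdate"), Bytes.toBytes(bookDate));
            put.addColumn(Bytes.toBytes("charge"), Bytes.toBytes("charge1"), Bytes.toBytes(charge1));
            batch.add(put);
            // The GIST version batches one put per column value; a single multi-column put works too.
            table.put(batch);
        }
    }
}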

Now we can show off some Spark in action. The file is ProcessData.java, which is also annotated, but I'll go over some parts here.

We have a few context objects and configuration objects that need to be initialized:

  private void init()
  {
      // Default HBase configuration for connecting to localhost on default port.
      conf = HBaseConfiguration.create();
      // Simple Spark configuration where everything runs in process using 2 worker threads.
      sparkconf = new SparkConf().setAppName("Jail With Spark").setMaster("local[2]");
      // The Java Spark context provides Java-friendly interface for working with Spark RDDs
      jsc = new JavaSparkContext(sparkconf);
      // The HBase context for Spark offers general purpose method for bulk read/write
      hbaseContext = new JavaHBaseContext(jsc, conf);
      // The entry point interface for the Spark SQL processing module. SQL-like data frames
      // can be created from it.
      sqlContext = new org.apache.spark.sql.SQLContext(jsc);
  }


A configuration object for HBase will tell the client where the server is etc.; in our case the default values for a local server work. On the next line, the Spark configuration gives it an application name and then tells it where the main driver of the computation is - in our case, we have a local in-process driver that is allowed to use two concurrent threads. The Spark architecture comprises a main driver and then workers spread around a cluster. The location of a remote main driver might look something like this: spark://masterhost:7077 (instead of "local[2]"). Then we create a Spark context, specifically a JavaSparkContext, because the original Scala API is not so easy to work with from Java - possible, just not very user-friendly. Then we create something called a JavaHBaseContext, which comes from the HBase-Spark module and knows how to talk to an HBase instance using the Spark data model - it can do bulk inserts and deletes, it can scan an HBase table as a Spark RDD and more. Finally, we create a context object representing an SQL layer on top of Spark data sets. Note that the SQL context object does not depend on HBase. In fact, different data sources can be brought under the same SQL processing API as long as there is some way to "relationalize" them. For example, there is a module that lets you process data coming from MongoDB as Spark SQL data as well. So in fact, you can have a federated data environment using a Spark cluster to perform relational joins between MongoDB collections and HBase tables (and flat files and ...).

Now, reading data from HBase is commonly done by scanning. You can perform some filtering operations, but there's no general purpose query language for it. That's the role of Spark and of other frameworks like Apache Phoenix, for example. Also, scanning HBase rows will give you binary values which need to be converted to the appropriate runtime Java type. So for each column value, you have to keep track yourself of what its type is and perform the conversion. The HBase API has a convenience class named Bytes that handles all basic Java types. To represent whole records, we use JSON as a data structure, so individual column values are first converted to JSON values with this utility method:

  // Extract the latest cell of the given column and convert its bytes to a JSON
  // value using the supplied converter (e.g. Bytes::toString).
  static final Json value(Result result, String family, String qualifier,
                          BiFunction<byte[], Integer, Object> converter)
  {
      Cell cell = result.getColumnLatestCell(Bytes.toBytes(family), Bytes.toBytes(qualifier));
      if (cell == null)
          return Json.nil();
      else
          return Json.make(converter.apply(cell.getValueArray(), cell.getValueOffset()));
  }

Given an HBase result row, we create a JSON object for our jail arrests records like this:

// Convert one HBase result row into a JSON object with the columns we care about.
static final Function<Result, Json> tojsonMap = (result) -> {
    Json data = Json.object()
        .set("name", value(result, "arrest", "name", Bytes::toString))
        .set("bookdate", value(result, "arrest", "bookdate", Bytes::toString))
        .set("dob", value(result, "arrest", "dob", Bytes::toString))
        .set("charge1", value(result, "charge", "charge1", Bytes::toString))
        .set("charge2", value(result, "charge", "charge2", Bytes::toString))
        .set("charge3", value(result, "charge", "charge3", Bytes::toString))
    ;
    return data;
};

With this mapping from binary HBase rows to a runtime JSON structure, we can construct a Spark RDD for the whole table as JSON records:

JavaRDD<Json> data = hbaseContext.hbaseRDD(TableName.valueOf("jaildata"), new Scan())
                                       .map(tuple -> tojsonMap.apply(tuple._2()));

We can then filter or transform that data any way we want. For example:

data = data.filter(j -> j.at("name").asString().contains("John"));

would give us a new data set which contains only offenders named John. An instance of JavaRDD is really an abstract representation of a data set. When you invoke filtering or transformation methods on it, it will just produce an abstract representation of a new data set, but it won't actually compute the result. Only when you invoke what is called an action method - one that returns something other than an RDD as its value - is the lazy computation triggered and an actual data set produced. For instance, collect and count are such action methods.
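
For example, nothing in the pipeline above has run yet; a small, assumed continuation of the snippet shows that invoking an action such as count is what actually kicks off the work:

// Action: triggers the HBase scan, the JSON mapping and the filter above.
long johnCount = data.count();
System.out.println("Offenders named John: " + johnCount);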

Ok, good. Running ProcessData.main should output something like this:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/12/29 00:41:00 INFO Remoting: Starting remoting
15/12/29 00:41:00 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.0.0.13:55405]
Older 'battery' offenders: 2548, younger 'battery' offenders: 1709, ratio(older/younger)=1.4909303686366295
Thefts: 34368
Contempts of court: 32

Hadoop

To conclude, I will just show you how to use Hadoop/HDFS to store HBase data instead of the normal OS filesystem. First, download Hadoop from https://hadoop.apache.org/releases.html. I used version 2.7.1. You can unzip the tarball/zip file in a standard location for your OS (e.g. /opt/hadoop on Linux).  Two configuration steps are important before you can actually start it:
  1. Point it to the JVM. It needs at least Java 7. Edit etc/hadoop/hadoop-env.sh (or hadoop-env.cmd) and change the line export JAVA_HOME=${JAVA_HOME} to point to the Java home you want, unless your OS/shell environment already does.
  2. Next you need to configure where Hadoop will store its data and what will be the URL for clients to access it. The URL for clients is configured in the etc/hadoop/core-site.xml file:
    <configuration>
     <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
     </property>
    </configuration>
     
    The location for data is in the etc/hadoop/hdfs-site.xml file. And there are in fact two locations: one for the Hadoop Namenode and one for the Hadoop Datanode:
    
    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///var/dfs/name</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///var/dfs/data</value>
      </property>
    </configuration>
  3. Before starting HDFS, you need to format the namenode (that's the piece that handles file names and directory names and knows what is where) by doing
          bin/hadoop namenode -format
  4. When starting Hadoop, you only need to start the HDFS file system by running
          sbin/start-dfs.sh
     Make sure it has permissions to write to the directories you have configured.
  5. If, after Hadoop has started, you don't see the admin web UI at http://localhost:50070, check the logs.
  6. To connect HBase to Hadoop, you must change the HBase root directory to be an HDFS one:
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
      </property>
    
  7. Restarting HBase now will bring you back to an empty database, as you can verify in the HBase shell with the 'list' command. So try running the import program and then the "data crunching" program to see what happens.
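
If you want to sanity check the HDFS side from Java before pointing HBase at it, a minimal client sketch (assuming the fs.defaultFS configured above; the class name is my own) looks like this:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        // Connect to the namenode configured in core-site.xml above.
        FileSystem fs = FileSystem.get(new URI("hdfs://localhost:9000"), new Configuration());
        // Once HBase has been restarted against HDFS, an /hbase directory should show up here.
        for (FileStatus status : fs.listStatus(new Path("/")))
            System.out.println(status.getPath());
        fs.close();
    }
}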

Wednesday, November 11, 2015

Announcing Seco 0.6 - Collaborative Scripting For Java

This is a short post to announce an incremental 0.6 release of Seco. The release comes with important bug fixes and a simplified first-time user experience.


Seco is a collaborative scripting development environment for the Java platform. You can write code in many JVM scripting languages. The code editor in Seco is based on the Mathematica notebook UI, but the full GUI is richer and much more ambitious. In a notebook, you can mix rich text with code and output, including interactive components created by your code. This makes Seco a live environment, because you can evaluate an expression and immediately see the changes to your program.


Monday, August 31, 2015

Scheduling Tasks and Drawing Graphs — The Coffman-Graham Algorithm

When an algorithm developed for one problem domain applies almost verbatim to another, completely unrelated domain, that is the type of insight, beauty and depth that makes computer science a science on its own, and not a branch of something else, namely mathematics, as many professionals educated in the field mistakenly believe. For example, one of the common algorithmic problems during the 60s was the scheduling of tasks on multiprocessor machines. The problem is: you are given a large set of tasks, some of which depend on others, that have to be scheduled for processing on N processors in such a way as to maximize processor use. A well-known algorithm for this problem is the Coffman-Graham algorithm. It assumes that there are no circular dependencies between the tasks, as is usually the case when it comes to real-world tasks, except in catch-22 situations at some bureaucracies run amok! To do that, the tasks and their dependencies are modeled as a DAG (a directed acyclic graph). In mathematics, this is also known as a partial order: if a task T1 depends on T2, we say that T2 precedes T1, and we write T2 < T1. The ordering is called partial because not all tasks are related by this precedence relation; some are simply independent of each other and can be safely carried out in parallel.

The Coffman-Graham algorithm works by creating a sequence of execution rounds where at each round at most N tasks execute simultaneously. The algorithm also has to make sure that all dependencies of the current round have been executed in previous rounds. Those two constraints are what makes the problem non-trivial: we want exactly N tasks at each round of execution if possible, so that all processors get used, and we also have to complete all tasks that precede a given task T before scheduling it. There are 3 basic steps to achieving the objective:


  1. Clean up the graph so that only direct dependencies are represented. So if there is a task A that depends on B and B depends on another task C, we already know that A depends “indirectly” on C (transitivity is one of the defining features of partial orders), so that dependency does not need to be stated explicitly. Sometimes the input of a problem will have such superfluous information, but in fact this could only confuse the algorithm! Removing the indirect dependencies is called transitive reduction, as opposed to the more common operation of transitive closure, which explicitly computes all indirect dependencies.
  2. Order the tasks in a single list so that the dependent ones come after their dependencies and they are sort of evenly spread apart. This is the crucial and most interesting part of the algorithm. So how are we to compare two tasks and decide which one should be run first? The trick is to proceed from the starting tasks, the roots of the graph that don’t have any dependencies whatsoever, and then progressively add tasks that depend only on them, and then tasks that depend only on those, etc. This is called topological ordering of the dependency graph. There are usually many possible such orderings and some of them will lead to a good, balanced distribution of tasks for the purpose of CPU scheduling while others will leave lots of CPUs unused. In step (3) of the algorithm, we are just going to take the tasks one by one from this ordering and assign them to execution rounds as they come. Therefore, to make it so that at each round the number of CPUs used is maximized, the ordering must somehow space the dependencies apart as much as possible. That is, if the order is written as [T1, T2, T3, …, Tn] and if Tj depends on Ti, we want j-i to be as big as possible. Intuitively, this is desirable because the closer they are, the sooner we’d have to schedule Tj for execution after Ti, and since they can’t be executed on the same parallel round, we’d end up with unused CPUs. To space the tasks apart, here is what we do. Suppose we have tasks A and B, with all their dependencies already ordered in our list, and we have to pick which one is going to come next. From A’s dependencies, we take the one most recently placed in the ordering and we check if it comes before or after the most recently placed task from B’s dependencies. If it comes before, then we choose A; if it comes after, then we choose B. If it turns out A and B’s most recently placed dependency is actually the same task that both depend on, we look at the next most recent dependency, etc. This way, by picking the next task as the one whose closest dependency is the furthest away, at every step we space out dependencies in our ordering as much as possible.
  3. Assign tasks to rounds so as to maximize the number of tasks executed on each round. This is the easiest step - we just reap the rewards from doing all the hard work of the previous steps. Going through our ordering [T1, T2, T3, …, Tn], we fill up available CPUs by assigning the tasks one by one. When all CPUs are occupied, we move to the next round and we start filling CPUs again. If, while at a particular round, the next task to be scheduled has a dependency that’s also scheduled for that same round, we have no choice but to leave the remaining CPUs unused and start the next round. The algorithm does not take into account how long each task can potentially take. (A compact sketch of these steps follows the list.)
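
Here is a compact Java sketch of steps 2 and 3 as just described; the names and the adjacency-list representation are my own choices for illustration, and the input is assumed to be acyclic and already transitively reduced per step 1:

import java.util.*;

public class CoffmanGraham {

    // deps.get(t) holds the direct dependencies of task t (tasks are numbered 0..n-1);
    // cpus is the number of processors N. Returns the list of execution rounds.
    public static List<List<Integer>> schedule(List<List<Integer>> deps, int cpus) {
        int n = deps.size();
        int[] position = new int[n];       // position of each task in the ordering, -1 = not placed yet
        Arrays.fill(position, -1);
        List<Integer> order = new ArrayList<>();

        // Step 2: build the ordering, always picking the ready task whose most
        // recently placed dependency is the furthest back.
        while (order.size() < n) {
            int best = -1;
            List<Integer> bestKey = null;
            for (int t = 0; t < n; t++) {
                if (position[t] != -1) continue;
                List<Integer> key = new ArrayList<>();
                boolean ready = true;
                for (int d : deps.get(t)) {
                    if (position[d] == -1) { ready = false; break; }
                    key.add(position[d]);
                }
                if (!ready) continue;
                key.sort(Collections.reverseOrder());   // most recently placed dependency first
                if (best == -1 || lessThan(key, bestKey)) {
                    best = t;
                    bestKey = key;
                }
            }
            position[best] = order.size();
            order.add(best);
        }

        // Step 3: fill rounds of at most 'cpus' tasks; start a new round when the
        // current one is full or when a dependency of the next task sits in it.
        List<List<Integer>> rounds = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (int t : order) {
            boolean dependsOnCurrent = false;
            for (int d : deps.get(t))
                if (current.contains(d)) { dependsOnCurrent = true; break; }
            if (current.size() == cpus || dependsOnCurrent) {
                rounds.add(current);
                current = new ArrayList<>();
            }
            current.add(t);
        }
        if (!current.isEmpty()) rounds.add(current);
        return rounds;
    }

    // Lexicographic comparison: a comes before b if its first differing entry is smaller
    // (a shorter, exhausted list of dependencies wins).
    static boolean lessThan(List<Integer> a, List<Integer> b) {
        for (int i = 0; i < Math.min(a.size(), b.size()); i++)
            if (!a.get(i).equals(b.get(i))) return a.get(i) < b.get(i);
        return a.size() < b.size();
    }
}

For the graph drawing use described next, the same schedule method applies, with cpus playing the role of the maximum number of nodes per layer.
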
Now, I said that this algorithm is also used to solve a completely different problem. The problem I was referring to is drawing networks in a visually appealing way. This is a pretty difficult problem and there are many different approaches whose effectiveness often depends on the structure of the network. When a network is devoid of cycles (paths from one node back to itself), the Coffman-Graham algorithm just described can be applied!
The idea is to think of the network nodes as the tasks and of the network connections as the dependencies between the tasks, and then build a list of consecutive layers analogous to the task execution rounds. Instead of specifying a number of available CPUs, one specifies how many nodes per layer are allowed, which is generally convenient because the drawing is done on a fixed-width computer screen. Because the algorithm does not like circular dependencies, there is an extra step here to remove a select set of connections so that the network becomes a DAG. This is in addition to the transitive reduction where we only keep direct connections and drop all the rest. Once the algorithm is complete, the drawing of those provisionally removed connections can be performed on top of the nice layering produced. Thus, Coffman-Graham is (also!) one of the hierarchical drawing algorithms, a general framework for graph drawing developed by Kozo Sugiyama.

Tuesday, July 28, 2015

HyperGraphDB 1.3 Released

Kobrix Software is pleased to announce the release of HyperGraphDB 1.3.

This is a maintenance release containing many bug fixes and small improvements. Most of the effort in this round has gone towards the various application modules built upon the core database facility.

Go directly to the download page.

HyperGraphDB is a general purpose, free open-source data storage mechanism. Geared toward modern applications with complex and evolving domain models, it is suitable for semantic web, artificial intelligence, social networking or regular object-oriented business applications.
This release contains numerous bug fixes and improvements over the previous 1.2 release. A fairly complete list of changes can be found at the Changes for HyperGraphDB, Release 1.3 wiki page.

HyperGraphDB is a Java based product built on top of the Berkeley DB storage library.

Key Features of HyperGraphDB include:
  • Powerful data modeling and knowledge representation.
  • Graph-oriented storage.
  • N-ary, higher order relationships (edges) between graph nodes.
  • Graph traversals and relational-style queries.
  • Customizable indexing.
  • Customizable storage management.
  • Extensible, dynamic DB schema through custom typing.
  • Out of the box Java OO database.
  • Fully transactional and multi-threaded, MVCC/STM.
  • P2P framework for data distribution.
In addition, the project includes several practical domain specific components for semantic web, reasoning and natural language processing. For more information, documentation and downloads, please visit the HyperGraphDB Home Page.

Many thanks to all who supported the project and actively participated in testing and development!

Monday, September 15, 2014

Jayson Skima - Validating JavaScript Object Notation Data

A schema is simply a pattern. A pure form. Computationally, it can be used to try to match an instance against the pattern or to create an instance from it. That's how XML Schema, and even DTD, were traditionally used - mostly for validation, but also as an easy way to create a fill-in-the-blanks type of template. Since JSON has been taking over XML by storm, the need for a schema eventually (in my case, finally!) overpowered the minimalist, simplicity-first instinct of JSON lovers. Thus was born JSON Schema. Judging from the earliest activity on the Google Group where the specification committee hung out (https://groups.google.com/forum/#!forum/json-schema), the initial draft was done in 2008 and it was very much inspired by XML Schema. Which is not necessarily a bad thing. For one, the syntax used to define a schema is just JSON. A quick example:

{
"type":"object",
"properties":{
  "firstName":{"type":"string"},
  "age":{"type":"number"}
},
"required":["firstName", "lastName"]
}

That is a complete, perfectly valid schema, defined to validate some imaginary data describing people. People data will be the running example theme in this post, and if you prefer shopping cart orders, well, sorry to disappoint. So, when we validate against that schema, here is what we are enforcing:
  1. The thing must be a JSON object because we put "type":"object".
  2. If it has a firstName property, the value of that property must be a string.
  3. The value of the age property, if present, must be a number.
  4. The properties firstName and lastName are required.
Fairly straightforward. Never mind that we haven't defined the format (i.e. the sub-schema) for the lastName property; we are still requiring its presence, we just don't care what the value is going to be. So, this is how it goes: a schema is a JSON object where you specify various constraints. If all the constraints are satisfied by a given JSON datum, the schema matches, otherwise it doesn't. The standard defines what the possible constraints are, plus a few extra keywords to structure large, complex schemas.

Why Do I Care?

I can tell you why I care and then you can decide for yourself. When working with any sort of data structure, if you can't just assume that the structure has the expected form, it becomes very annoying: paranoia strikes at every corner, you become defensive and all sorts of mental disorders can ensue. That's why we use strongly typed languages. But if you, like me, have been doing the unorthodox thing and using JSON as your main data structure, instead of spitting out a jar full of beans for your domain model, then this sort of trouble befalls you.

A few years ago I made the decision to drop the beans for a project, and since then I've been using the strategy in a few other, smaller scale projects where it works even better. But the popular Java libraries for JSON are a disaster. There has already been one JSR, 353, about JSON - a "whatever" API, no wonder it seems dead on arrival, almost as bad as Jackson and Gson. And now Java 9 is promising a "lightweight JSON API" which looks like it might actually be well-designed, albeit with different goals than what I need, and simplicity is not one of them. So I wrote mJson. It is a small, 1-Java-file JSON library. I wanted something simple, elegant and powerful. The first two I think I've achieved; the "powerful" part is only half-way there. For instance, many people expect JSON libraries to have JSON <-> POJO mappings and mJson doesn't, though it has extension points to do your own easily (frankly, it takes half a day to implement this stuff if you need it so much).

Modeling with beans offers the type checker's help in validating that structures have the desired form. If you are using JSON only to convert it to Java beans, I suppose the mapping process is a roundabout way to validate, to a certain extent. Otherwise, you either consent to live with the risk of bugs or to the extra bloat needed to defensively code against a structure that may be broken. To avoid these problems, you can write a schema - sort of like your own type rules - and make use of it at strategic points in the program. Like when you are getting data from the internet. Or when you are testing a new module. Not that I'm advocating going for JSON + Schema instead of Java POJOs in all circumstances. But you should try it some time, see where it makes sense. By the way, in addition to being a validation tool, schemas are essentially metadata that represents your model (just like XML Schema).
Good. Now I want to give you a quick...

Crash Course on JSON Schema

First, the constraints are organized by the type of JSON value, which is probably your starting point when describing what a JSON document looks like:
{"type": "string"|"object"|"array"|"number"|"boolean"|"null"|"integer"|"any"}
As you can see, there are two additional possible types besides what you already expected: one to avoid floats ("integer") and one to allow any type ("any", which is the same as omitting the type constraint altogether).

Now, given the type of a JSON entity, there are further constraints available. Let's start with object. With properties you describe the format of the object's properties in the form of sub-schemas, one for each possible property you want to talk about. You don't have to list all of them, but if your set is exhaustive, you can state that with the additionalProperties keyword:
{
  "properties": { "firstName": { "type":"string"}, etc.... },
  "additionalProperties":false
}
That keyword is actually quite versatile. Here we are disallowing any other properties besides the ones explicitly stated. If instead we want to allow the object to have other properties, we can set it to true, or not set it at all. Or, the value of the additionalProperties keyword can alternatively be a sub-schema that specifies the form of all the extra properties in the object.

We saw how to specify required properties in the example above. Two other options constrain the size of an object: minProperties and maxProperties. And for super duper flexibility in naming, you can use regular expression patterns for property names - this could be useful if you have some format made of letters and numbers, or UUIDs for example. The keyword for that is patternProperties:
{
  "patternProperties": { "j_[a-zA-Z]+[0-9]+": { "type":"object"} },
  "minProperties":1,
  "maxProperties":100
}
The above allows 1 to 100 properties whose names follow a j_letters_digits pattern. That's it about objects. That's the biggy.

Validating arrays is mainly about validating their elements, so you provide a sub-schema for the elements with the items keyword. You either give a single schema or an array of schemas: a single schema applies to all elements of the array, while an array of schemas has to match element for element (a.k.a. tuple typing). That's the basis. Here are the extras: we have minItems and maxItems to control the size of the array; we have additionalItems, which only applies when items is an array and controls what to do with the extra elements when there are some. Similarly to the additionalProperties keyword, you can put false to disallow extra elements or supply a schema to validate them. Finally, you can require that all items of an array be distinct with the uniqueItems keyword. Example:
{
  "items": { "type": "string" },
  "uniqueItems":true, 
  "minItems":10,
  "additionalItems":false
}
Here we are mandating at least 10 unique strings in an array (with a single item schema, additionalItems has no effect). That's it for arrays. Numbers and strings are pretty simple. For numbers you can define range and divisibility. The keywords are minimum (for >=), maximum (for <=), exclusiveMinimum (if true, minimum means >), exclusiveMaximum (if true, maximum means <). Strings can be validated through a regex with the pattern keyword and by constraining their length with minLength and maxLength. I hope you don't need examples to see string and number validation in action. The regular expression syntax accepted is ECMA 262 (http://www.ecma-international.org/publications/standards/Ecma-262.htm).

Notice that there aren't any logical operators so far. In a previous iteration of the JSON Schema spec (draft 3), some of those keywords admitted arrays as values, with the interpretation of an "or". For example, the type could be ["string", "number"], indicating that a value can be either a string or a number. Those have been abandoned in favor of a comprehensive set of logical operators to combine schemas into more complex validating behavior. Let's go through them: "and" is allOf, "or" is anyOf, "xor" is oneOf, "not" is not. Those are literally to be interpreted as standard logic: not has to be a sub-schema which must not match for validation to succeed; allOf has to be an array of schemas and all of them have to match for the JSON to be accepted. Similarly, anyOf is an array of which at least one has to match, while oneOf means that exactly one of the schemas in the array must match the target JSON. For example, to enforce that a person is married, we could declare that it must have either a husband or a wife property, but not both:
{ "oneOf":[
  {"required":["husband"]},
  {"required":["wife"]}
]}
If you have a predefined list of values, you could use enum. For example, a gender property has to be either "male" or "female":
{ 
  "properties":{
    "gender":{"enum":["male", "female"]}
  }
}

With that, you know almost everything there is to know about JSON Schema. Almost. Above I mentioned "a few extra keywords to structure large complex schemas". I exaggerated. Actually there is only one such keyword: $ref (the related keyword id is not really needed). $ref allows you to refer to schemas defined elsewhere instead of having to spell out the same constructs again. For example, if there is a standard format for addresses somewhere on the internet, with a schema defined for it, and if that schema can be obtained at http://standardschemas.org/address (a made-up URL), you could do:
{ 
  "properties":{
    "address":{"$ref":"http://standardschemas.org/address"}
  }
}
The fun part of $ref is that the URI can be relative to the current schema and you can use a JSON Pointer in a URI fragment (the part after the pound # sign) to refer to a portion of the schema within a document! JSON Pointer is a small RFC (http://tools.ietf.org/html/rfc6901) that specs out Unix path-like expressions to navigate through properties and arrays in a complex JSON structure. For example, the expression /children/0/gender refers to the gender of the first element in a children array property. Note that only the slash is used, no brackets or dots, and that's perfectly enough. If you want to escape a slash inside a property name, write ~1, and to escape a tilde, write ~0. To get your hands on some presumably rock solid zip code validation, for example, you could do:
{ 
  "properties":{
    "zip":{"$ref":"http://standardschemas.org/address#/postalCode"}
  }
}
So that means you can define the schemas for your RESTful API at a standard location, publish those and/or refer to them in your API responses. Any JSON validator has to be capable of fetching the right sub-schema, and a good implementation will cache them so you don't have to worry about network hops. A reference URI can be relative to the current schema, so if you have other schemas at the same base location, they can refer to each other irrespective of where they are deployed. As a special case of that, you can resolve fragments relative to the current schema. For example:
{ 
  "myschemas": {
     "properName": { "type":"string", "pattern":"[A-Z][a-z]+"}
  },
  "properties":{
    "firstName":{ "$ref":"#/myschemas/properName"},
    "lastName":{ "$ref":"#/myschemas/properName"}
  }
}
Because the JSON Schema specification allows properties that are not keywords, we can just pick a name, like myschemas here, as a placeholder for sub-schemas that we want to reuse. So we've defined that a proper name must start with a capital letter followed by one or more lowercase letters, and then we can reuse that anywhere we want. This is such a common pattern that the specification has defined a keyword to place such sub-schemas. This is the definitions keyword, which must appear at the top level; it has no role in validation per se, but is just a placeholder for inline schemas. So the above example should be properly rewritten as:
{ 
  "definitions": {
     "properName": { "type":"string", "pattern":"[A-Z][a-z]+"}
  },
  "properties":{
    "firstName":{ "$ref":"#/definitions/properName"},
    "lastName":{ "$ref":"#/definitions/properName"}
  }
}
To sum up, using the $ref keyword and the definitions placeholder is all you need to structure large schemas, split them into smaller ones, possibly in different documents, refer to standardized schemas over the internet etc.

Resources

Now, to make use of JSON Schema, there aren't actually that many implementations available yet. The popular (and bloated) Jackson supports draft 3 so far, and this part doesn't seem actively maintained. One of the JSON Schema spec authors has implemented full support on top of Jackson: https://github.com/fge/json-schema-validator, so you should know about that implementation, especially if you are already a Jackson user. But if you are not, I want to point you to another option available since recently: mJson 1.3, which supports JSON Schema Draft 4 validation:

Json schema = Json.read(new URL("http://mycompany.com/schemas/model"));
Json data = Json.object("firstName", "John", "lastName", "Smith").set("children", Json.array().add(/* etc... */));
Json errors = schema.validate(data);
for (Json e: errors.asJsonList())
   System.out.println("JSON validation error:" + e);

In all fairness, some of the other libraries also have support for generating JSON based on a schema, with default values specified by the default keyword, which I haven't covered here. mJson doesn't do that yet, but if there's demand I'll put it in. The keywords I haven't covered are title, description (metadata keywords not used during validation) and id. To become an expert, you can always read the spec. Here it is, alongside some other resources:



For Dessert

To part ways, I want to leave you with a little gem, one more resource. Somebody came up with a much more concise language for describing JSON structures. It's called Orderly; it compiles into JSON Schema and I haven't tried it. If you do, please report back. It's at http://orderly-json.org/ and it looks like this:

object {
  string name;
  string description?;
  string homepage /^http:/;
  integer {1500,3000} invented;
}*;

Tuesday, August 26, 2014

Where are the JVM Scripting IDEs?

The rise of scripting languages in the past decade has been spectacular. And since the JVM platform is the largest, a few were designed specifically for it while many others were also implemented on top of it. It is thus that we have JRuby, Jython, Groovy, Clojure, Rhino, JavaFX and the more obscure (read: more fun) things like Prolog and Scheme implementations. Production code is being written, dynamic language code bases are growing, whole projects don't even have any Java code proper. Yet when it comes to tooling, the space is meager to say the least.

What do we have? In the Eclipse world, there's the Dynamic Languages Toolkit, which you can explore at http://www.eclipse.org/dltk/, or some individual attempts like http://eclipsescript.org/ for the Rhino JavaScript interpreter or the Groovy plugin at http://groovy.codehaus.org/Eclipse+Plugin. All of those provide means to execute a script inside the Eclipse IDE and possibly syntax highlighting and code completion. The Groovy plugin is really advanced in that it offers debugging facilities, which of course is possible because the Groovy implementation itself has support for it. That's great. But frankly, I'm not that impressed. Scripting seems to me a different beast than normal development. Normally you do scripting via a REPL, which is traditionally a very limited form of UI because it's constrained by the limitations of a terminal console. What text editors do to kind of emulate a REPL is let you select the expression to evaluate as a portion of the text, or take everything on a line, or, if they are more advanced, use the language's syntax to get to the smallest evaluate-able expression. It still feels a little awkward. Netbeans' support is similar. Still not impressed. "What more do you want?", you may ask. Well, I don't know exactly, but more. There's something I do when I write code in scripting languages, a certain state of mind and a way of approaching problems, that is not the same as with static, verbose languages such as Java.

The truth is the IDE brought something to Java (and Pascal and C++ etc.) that made the vast majority of programmers never want to look back. Nothing of the sort has happened with dynamic languages. What did IDEs bring? Code completion was a late comer, compared to integrated debugging and the project management abilities. Code completion came in at about the same time as tools to navigate large code bases. Both of those need a structured representation of the code and until IDEs got powerful and fast enough to quickly generate and maintain in sync such a representation, we only had an editor+debugger+a project file. Now IDEs also include anything and everything around the development process, all with the idea that the programmer should not leave the environment (nevermind that we prefer to take a walk outside from time to time - I don't care about your integrated browser, Chrome is an Alt-tab away!).

Since I've been coding with scripting languages even before they became so hot, I had that IDE problem a long time ago. That is to say, more than 10 years ago. And there was one UI for scripting that I thought was not only quite original, but a great match for the kind of scripting I was usually doing, namely exploring and testing APIs, writing utilities, throwaway programs, prototypes - lots of activities that occasionally occupy a bigger portion of my time than end-user code. That UI was the Mathematica notebook. If you have never heard of it, Mathematica (http://www.wolfram.com/mathematica) is a commercial system that came out in the 90s and has steadily been growing its user base, with even larger ambitions as of late. The heart of it is its term-rewrite programming language, nice graphics and sophisticated math algorithms, but the notion of a notebook, as a better-than-REPL interface, is applicable to any scripting (i.e. evaluation-based, interpreted) language. A notebook is a structured document that has input cells, output cells, groups of cells, groups of groups of cells, etc. The output cells contain anything that the input produces, which can be a complex graphic display or even an interactive component. That's perfect! How come we haven't seen it widely applied?

Thus Seco was born. To a first approximation, Seco is just a shell for JVM dynamic languages that imitates Mathematica's notebooks. It has its own ambitions a bit beyond that, moving towards an experimental notion of software development as a semi-structured evolutionary process. Because of that grand goal, which should not distract you from the practicality of the tool that I and a few friends and colleagues have been using for years, Seco has a few extras, like the fact that your work is always persisted on disk, and a more advanced zoomable interface that goes beyond the mere notebook concept. The best way to see why this is worth blogging about is to play with it a little. Go visit http://kobrix.com/seco.jsp.

Seco was written almost in its entirety by a former Kobrix Software employee, Konstantin Vandev. It is about a decade old, but active development stopped a few years ago. I took a couple of hours here and there in the past months to fix some bugs and started implementing a new feature: a centralized, searchable repository for notebooks, so people can back up their work remotely, access it and/or publish it. That feature is not ready, but I'd like to breathe some life into the project by making a release. So consider this an official Seco 0.5 release which, besides the aforementioned bug fixes, upgrades to the latest version of HyperGraphDB (the backing database where everything gets stored) and removes the dependency on the BerkeleyDB native library, so it's pure Java now.