Generation 5 Towards Intelligent Systems

Beyond Freebase and DBpedia Tue, 19 Jun 2012 21:01:01 +0000 Paul Houle
The triumph of generic databases

The computerization of commonsense knowledge goes back at least to Ross Quillian’s paper in the 1969 book Semantic Information Processing. Quillian used methods not all that different from what I use today, but he was able to store only a few hundred concepts in his computer.

The Cyc project, started in the 1980s, contains about 3 million facts. It was successful on its own terms, but it didn’t lead to the revolution in natural language processing that it promised. WordNet, from the same era, documents about 120,000 word senses, but like Cyc, it hasn’t had a large engineering impact.

DBpedia and Freebase have become popular lately, I think because they’re a lot like traditional databases in character. For a person, place or creative work you’ve got the information necessary to make a ‘Pokemon card’ about the topic. With query languages like SPARQL and MQL it’s possible to write the kinds of queries you’d write against a relational database, so people have an idea of what to do with them.

DBpedia and Freebase are much larger than the old commonsense databases. The English DBpedia contains 4 million topics derived from Wikipedia pages, and Freebase contains hundreds of millions of facts about some 24 million topics. It’s hard to quantify, but subjectively, people feel that Wikipedia contains most of the concepts that turn up when they are reading or thinking about things. Because the new generic databases are about concepts rather than words, they are inherently multilingual.

DBpedia Spotlight is the first of a breed of language processing products that use world knowledge instead of syntactic knowledge. Using a knowledge base created from DBpedia and Wikipedia, Spotlight gets accuracy comparable to commercial named entity recognition systems — although Spotlight uses simple methods and, so far, has made little of the effort a commercial system would to systematically improve accuracy.

pain points

The greatest difficulty with generic databases today is documentation. :BaseKB, derived from Freebase, has roughly 10,000 classes and 100,000 properties. Finding what you’re looking for, and even knowing if it is available, can be difficult.

Data quality is another problem, but it’s harder to characterize. The obvious problem is that you write queries and get wrong answers. If, for instance, you ask for the most important cities in the world, you might find that some big ones are missing and that some entities that aren’t cities at all, such as countries, have been mistyped as cities. If you look for the people with the latest birthdates, some dates may be in the future: some wrong, some anticipated births, and others mistyped science fiction characters.

Quality problems can have an impact beyond their rate of incidence. Wikipedia, for instance, contains about 200,000 pages documenting the tree of biological classification. Wikipedians don’t have real tools for editing hierarchies, so any edit they make to the tree could break the structure of the tree as a whole. Even if tree-breaking errors occurred at a rate of 1 in 50,000, it would be impossible (or incorrect) to use algorithms on this tree that assume it really is a tree.
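As a sketch of the kind of systematic test I have in mind, here’s how one might scan a child→parent table for structure-breaking edits; the representation and names are my own invention, not Wikipedia’s actual data model:

```javascript
// Sketch: verify that a classification hierarchy is really a tree.
// Assumes the data has been reduced to a map from each node to its parent.
function findTreeViolations(parentOf, root) {
  var violations = [];
  for (var node in parentOf) {
    // Walk upward; in a well-formed tree we reach the root
    // without ever revisiting a node.
    var seen = {};
    var current = node;
    while (current !== root) {
      if (seen[current]) {
        violations.push({ node: node, problem: "cycle" });
        break;
      }
      seen[current] = true;
      if (!(current in parentOf)) {
        violations.push({ node: node, problem: "dangling parent" });
        break;
      }
      current = parentOf[current];
    }
  }
  return violations;
}
```

A run over a clean fragment returns an empty list; a single bad edit that creates a cycle or points at a missing parent shows up immediately.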

Quality problems of this sort can be found by systematic testing and evaluation — these can be repaired at the source or in a separate data transformation step.

intelligence without understanding

One answer to the difficulties of documentation is to use algorithms that consume predicates wholesale; one might filter or weight predicates, but not get too concerned about the exact meaning of any particular predicate. Systems like this can make subjective judgments about importance, offensiveness, and other attributes of topics. The best example of this is a satisfying music recommendation engine developed with less than 10% of the expense and effort that goes into something like Last.FM or Pandora.

These approaches are largely unsupervised and work well because the quality (information density) of semantic data is higher than that of keywords, votes, and other inputs to social-IR systems. Starting with a good knowledge base, the “cold start” problem in collaborative filtering can be largely eliminated.
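To make this concrete, here is a minimal sketch of the kind of unsupervised, knowledge-driven scoring I mean: items are compared by the overlap of their semantic facts, so a brand-new item with no user history can still be recommended. The fact encoding is invented for illustration:

```javascript
// Jaccard similarity between two fact sets, represented as objects
// whose keys are "predicate:object" strings from a knowledge base.
function jaccard(a, b) {
  var inter = 0;
  for (var key in a) if (key in b) inter++;
  var union = Object.keys(a).length + Object.keys(b).length - inter;
  return union === 0 ? 0 : inter / union;
}

// Rank every item in the catalog by similarity to a seed item's facts.
// No ratings or play counts are required, so there is no cold start.
function recommend(seedFacts, catalog) {
  return Object.keys(catalog)
    .map(function (name) {
      return { name: name, score: jaccard(seedFacts, catalog[name]) };
    })
    .sort(function (x, y) { return y.score - x.score; });
}
```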

beyond predicates

In the RDF model we might make a statement like

:Robert_Redford :actedIn :Sneakers .

we’ve yet to find a satisfactory way to say things about this relationship. Was he the star of this movie? Did he play a minor part? Who added this fact? How much do we trust it?
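Pending a standard answer, one ad hoc approach is to treat the statement itself as an object that carries its own metadata, much as RDF reification or property graphs do. The property names below are invented for illustration:

```javascript
// Sketch: a statement as a first-class object with metadata about
// the relationship itself. Field names here are hypothetical.
var statement = {
  subject: ":Robert_Redford",
  predicate: ":actedIn",
  object: ":Sneakers",
  meta: {
    role: "starring",       // star of the movie, or a minor part?
    addedBy: "user:4711",   // who added this fact?
    confidence: 0.95        // how much do we trust it?
  }
};

// Render the annotated statement for display or debugging.
function describe(st) {
  return st.subject + " " + st.predicate + " " + st.object +
    " (role=" + st.meta.role + ", confidence=" + st.meta.confidence + ")";
}
```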

These issues are particularly important for a data Wiki, like Freebase, because provenance information is crucial if multiple people are putting facts into a system.

These issues are also important in user interfaces because, if we know thousands of facts about a topic, we probably can’t present them all, and if we did, we can’t present them all equally. For an actor, for instance, we’d probably want to show the movies they starred in, not the movies they played a minor role in.

Google is raising the standard here: this article shows how the Google Knowledge Graph can convert the “boiling point” predicate into a web search and sometimes shows the specific source of a fact.

This points to a strategy for getting the missing information about statements — if we have a large corpus of documents (such as a web crawl) we search that corpus for sentences that discuss a fact. Presumably important facts get talked about more frequently than unimportant facts, and this can be used as a measure for importance.
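A minimal sketch of that idea, assuming the corpus has already been split into sentences and entities are matched by simple string containment:

```javascript
// Sketch: estimate a fact's importance by counting sentences in a
// corpus that mention both entities involved in the fact.
function factSupport(sentences, entityA, entityB) {
  var count = 0;
  sentences.forEach(function (s) {
    if (s.indexOf(entityA) !== -1 && s.indexOf(entityB) !== -1) count++;
  });
  return count;
}
```

A real system would resolve surface forms to concepts first (as Spotlight does) rather than matching raw strings, but the counting principle is the same.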

I’ll point out that Common Crawl is a freely available web crawl hosted in Amazon S3, and that by finding links to Wikipedia and official web sites one can build a corpus that’s linked to DBpedia and Freebase concepts.

lexical data

Although Freebase and DBpedia know of many names for things, they lack detailed lexical data. For instance, if you’re synthesizing text, you need to know something about the labels you’re using so you can conjugate verbs, make plurals, and use the correct article. Things are relatively simple in English, but more complex in languages like German. Parts-of-speech tags on the labels would help with text analysis.
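To see why this matters, here are naive English heuristics for the article and plural problems; real lexical data would record this information per label rather than guessing, and these rules fail on many irregular words:

```javascript
// Naive heuristic: choose "a" or "an" from the first letter only.
// (Fails on words like "hour" or "university"; stored lexical
// data would get these right.)
function indefiniteArticle(noun) {
  return /^[aeiou]/i.test(noun) ? "an" : "a";
}

// Naive English pluralizer covering the common regular patterns.
function naivePlural(noun) {
  if (/(s|x|z|ch|sh)$/i.test(noun)) return noun + "es";
  if (/[^aeiou]y$/i.test(noun)) return noun.slice(0, -1) + "ies";
  return noun + "s";
}
```

Even this much is English-specific; a German synthesizer would also need gender and case for every label, which is exactly the lexical data DBpedia and Freebase lack.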

YAGO2 is an evolving effort to reconcile DBpedia’s concept database with WordNet; it’s one of the few large-scale reconciliations that has been statistically evaluated and was found to be 95% accurate. YAGO2’s effectiveness has been limited, however, by WordNet’s poor coverage compared with DBpedia. There’s a need for a lexical extension to DBpedia and Freebase that’s based on a much larger dictionary than WordNet.

relationship with machine learning

I was involved with text classification using support vector machines around 2003 and expected the technology to become widely used. In our work we found we got excellent classification performance when we had large (10,000 document) training sets, but poor results with 50-document sets.

Text classification hasn’t been so commercially successful and I think that’s because few users will want to classify 10,000 documents to train a classifier, and even in large organizations, many classes of documents won’t be that large (when you get more documents, you slice them into more categories.)

I think people learn efficiently because they have a large amount of background knowledge that helps them make hypotheses that work better than chance. If we’re doing association mining between a million signals, there are a trillion possible associations. If we know that some relationships are more plausible than others, we can learn more quickly. With large numbers of named relationships, it’s possible that we can discover patterns and give them names, in contrast to the many unsupervised algorithms that discover real patterns but can’t explain what they are.

range of application

Although generic databases seem very general, people I talk to are often concerned that they won’t cover the terms they need, and in some cases they are right.

If a system is parsing notes taken in an oil field, for instance, the entities involved are specific to a company — Robert Redford and even Washington, DC just aren’t relevant.

Generic databases apply obviously to media and entertainment. Fundamentally, the media talks about things that are broadly known, and things that are broadly known are in Wikipedia. Freebase adds extensive metadata for music, books, tv and movies, so that’s all the better for entertainment.

It’s clear that there are many science and technology topics in Wikipedia and Freebase. Freebase has more than 10,000 chemical compounds and 38,000 genes. Wikipedia documents scientific and technological concepts in great detail, so the foundations for a science and technology knowledge base are certainly there; evaluating how good a foundation this is, and how to improve it, is a worthwhile topic.

Generic databases could also make web and enterprise search radically better. Semantic labels applied to words can be resolved to graph fragments in DBpedia, Freebase and similar databases. A system that constructs a search index based on concepts rather than words could be effective at answering many kinds of queries. The same knowledge base could be processed into something like the Open Directory, but better, because it would not be muddled and corrupted by human editors.

Concept-based search would be effective for the enterprise as well, where it’s necessary to find things when people use different (but equivalent) words in search than exist in the document. Concept-based search can easily be made strongly bilingual.
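A sketch of the core data structure: an inverted index keyed by concept URIs rather than words, so documents annotated in different languages land in the same posting list. The URI spellings are illustrative:

```javascript
// Build an inverted index from concept URI to document ids.
// docs: [{id: "...", concepts: ["dbpedia:Automobile", ...]}, ...]
function buildConceptIndex(docs) {
  var index = {};
  docs.forEach(function (doc) {
    doc.concepts.forEach(function (c) {
      (index[c] = index[c] || []).push(doc.id);
    });
  });
  return index;
}

// Look up every document annotated with a concept, regardless of
// which language or surface form the document used.
function search(index, concept) {
  return index[concept] || [];
}
```

Because the index keys are concepts, a query about “car” and a query about “Auto” resolve to the same entry, which is what makes this approach strongly bilingual.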

middle and upper ontologies

It is exciting today to see so many knowledge bases making steady improvements. Predicates in Freebase, for instance, are organized into domains (“music”) and further organized into 46 high level patterns such as “A created B”, “A administrates B” and so forth. This makes it possible to tune weighting functions in an intuitive, rule-based, manner.

Other ontologies, like UMBEL, are reconciled with generic databases but reflect a fundamentally different point of view. For instance, UMBEL documents human anatomy in much better detail than Freebase so it could play a powerful supplemental role.


Generic databases, when brought into confrontation with document collections, can create knowledge bases that enable computers to engage with text in unprecedented depth. They’ll not only improve search, but also open a world of browsing and querying languages that will let users engage with data and documents in entirely new ways.

What Is Ookaboo? Fri, 20 Aug 2010 19:43:11 +0000 Paul Houle

Ookaboo is a collection of images that are indexed by terms from the semantic web, or, the web of linked data. Ookaboo provides both a human interface and a semantic API. Ookaboo has two goals: (i) to dramatically improve the state of the art in image search for both humans and machines, and (ii) to construct a knowledge base about the world that people live in that can be used to help information systems better understand us.

Semantic Web, Linked Data

In the semantic web, we replace the imprecise words that we use every day with precise terms defined by URLs. This is linked data because it creates a universal shared vocabulary.

For example, in conventional image search, a person might use the word “jaguar” to search for any of several unrelated things that share the name, such as the animal or the car.

Note that there are pages in Wikipedia about each of these topics: it’s reasonable, therefore, that we could use those URLs as a shared vocabulary for referring to these things. We get additional benefits when we use URLs that are linked to machine-readable pages.

Pages on Ookaboo are marked up with RDFa, a standard that lets semantic web tools extract machine readable information from the same pages that people view.

Named entities

Ookaboo is oriented around named entities, particularly ‘concrete’ things such as places, people and creative works. With current technology, it’s more practical to create a taxonomy of things like “Manhattan”, “Isaac Asimov” and “The Catcher In the Rye” than it is to tackle topics like “eating”, “digestion” and “love”. We believe that a comprehensive exploration of named entities will open pathways to an understanding of other terms, and hope to extend Ookaboo’s capabilities as technology advances.

Ookaboo semantic API

The Ookaboo semantic API is a simple mechanism for querying Ookaboo for photographs of named entities defined by URLs. Based on JSON, it makes it easy for automated systems to find photographs about topics. You can get started in a few minutes by reading the documentation and creating an API key.
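As an illustration of the general shape of such a call (the endpoint and parameter names below are placeholders, not Ookaboo’s documented interface; consult the real API documentation before use):

```javascript
// Sketch: build a request URL for a JSON image API keyed by entity URL.
// "key" and "entity" are hypothetical parameter names for illustration.
function buildImageQuery(baseUrl, apiKey, entityUrl) {
  return baseUrl +
    "?key=" + encodeURIComponent(apiKey) +
    "&entity=" + encodeURIComponent(entityUrl);
}
```

The point is simply that the query term is a URL naming a concept, not a keyword, so the response is unambiguous about which “jaguar” you meant.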

Creative Commons

All images in Ookaboo are either public domain or under Creative Commons licenses. Like Wikimedia Commons, we exclude images with a “noncommercial” clause; unlike Wikimedia, we refuse non-free images with a claim of “fair use”, but we permit images that contain a “nonderivative” clause.

Visitors to Ookaboo and users of the API are invited to use images they find in a manner consistent with their licensing. We believe that a link to an image’s picture metadata page on Ookaboo satisfies the “BY” requirement in Creative Commons, because our pages trace the provenance of images. However, we advise you to contact the creator of an image if you have any questions; for instance, the creator of an image can grant you the right to use it under terms different from the Creative Commons license.

Geospatial Reasoning

Ookaboo is initially focused on things that are located in space: countries, cities, administrative divisions, monuments, buildings and bridges. This is part of the Ontology2 strategy of exploiting “unreasonably effective” strategies for organizing information. As Ookaboo evolves, expect to see spatial knowledge reflected in both the UI and API.

Nested Functions: C# that looks like Javascript Fri, 14 May 2010 15:10:04 +0000 Paul Houle

Introduction

Anonymous functions and closures are a language feature that, in many ways, allows programmers to reshape the syntax of a language. Although often associated with highly dynamic languages such as Lisp and TCL, and moderately dynamic languages such as Javascript, C# shows that closures retain their power in statically typed languages. In this article, I identify a language feature that I like from Pascal and ‘clone’ it using closures in C#.

Nested Functions in Pascal

Pascal (a historically important teaching language that’s as statically typed as you can get) allows programmers to define a named function inside the scope of an enclosing function.  This is a mechanism for encapsulation,  much like the mechanisms of object-oriented programming.  Here’s an example of how nested functions work in Pascal,  borrowed from Wikipedia:

function E(x: real): real;
   function F(y: real): real;
   begin
       F := x + y
   end;
begin
    E := F(3)
end;

Note that you return values in Pascal by assigning to a ‘variable’ with the same name as the function. The F function is nested inside E, so (i) F can only be called inside E and (ii) F has access to variables defined inside E. This example is contrived (it’s an obfuscated way to write E(x) = x + 3), but this is a useful tool if you’ve got a chunk of code that is reused several times inside a function.

Nested Functions in C#

Although C# doesn’t explicitly support nested functions,  it’s easy to simulate this feature using an anonymous function (a lambda.)  To do this,  I use an idiom which is common in Javascript,  the naming of an anonymous function by assigning it to a named variable:

09 static double E(double x) {
10    Func<double,double> F = (double y) => x+y;
11    return F(3.0);
12 }
Note that F is a variable with a delegate type, Func&lt;double,double&gt;, but the call to F on line 11 looks exactly like an ordinary function call. The C# compiler does some funky things behind the scenes to make this work, but it’s most important to note that you’re not allowed to use ‘var’ on line 10, because the compiler infers the type of the lambda from the left-hand side (LHS) of the assignment.
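For comparison, here is the same example written directly in Javascript, where the idiom needs no delegate type at all:

```javascript
// The nested-function idiom in Javascript: an anonymous function
// assigned to a named variable, closing over the enclosing x.
function E(x) {
  var F = function (y) { return x + y; };
  return F(3.0);
}
```

The dynamic typing hides the machinery, but the structure is identical to the C# version: F exists only inside E and captures E’s local variables.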

But why?

I was writing an algorithm over trees the other day,  and noticed that there was a chunk of code that I was repeating in multiple places inside a single function;  as this chunk was increasing in complexity,  I felt alarmed by the duplication.  I could have spun the “inner” function into a named method of the class the “outer” function was in,  but that would have meant moving certain variables out of method scope into class scope — I didn’t like this,  because the rest of the class had no business accessing these variables.

I could have split out the tree algorithm into a separate class,  but that bulks up the code and creates more artifacts to maintain.  Splitting the algorithm into a separate class might let me enable reuse by adding extension points and using the Visitor pattern,  but I could have ended up creating an interface and several new classes…  while never getting around in the future to take advantage of that promised reuse.

Object-functional languages like C# offer programmers new choices when it comes to encapsulation, inheritance and reuse: the study of patterns and idioms used in languages such as LISP and Javascript can be fruitful for the C# programmer, and proves that many of the strengths people associate with dynamically typed languages can be enjoyed in statically typed languages.

Closures, Javascript And The Arrow Of Time Tue, 23 Jun 2009 15:24:40 +0000 Paul Houle


In most written media, time progresses as you move down a page: mainstream computing languages are no different. Anonymous closures are a language mechanism that, effectively, lets programmers create new control structures. Although people associate this power with exotic dynamic languages such as FORTH, Scheme and TCL, closures are becoming a feature of mainstream languages such as Javascript and PHP (and even static languages such as C#.)

Although this article talks about issues that you’ll encounter in languages such as C# and Scheme, I’m going to focus on Javascript written on top of the popular JQuery library. I do that because JQuery is a great toolkit that lets programmers and designers of all skill levels do a lot by writing very little code. Because JQuery smooths away low-level details, it lets us clearly illustrate little weirdnesses of its programming model. Although things often work “just right” on the small scale, little strange things snowball in larger programs; a careful look at the little problems is a key step towards avoiding and mitigating problems in big RIA projects.

Time In Programming Languages

Absent control structures, execution proceeds from top to bottom in a computer program. Consider:

01 document.write('Hello');
02 document.write(' ');
03 document.write('World!');

Today’s languages are based on structured programming, which encourages us to use a limited number of control structures, such as conditional expressions

04 if(happy==true) {
05    document.write('Hello');
06 } else {
07    document.write('Goodbye Cruel');
08 }
10 document.write('  ');
11 document.write('World!');

and loops

12 var sum=0;
13 for(var i=0;i&lt;x.length;i++) {
14   sum += x[i];
15 }

With GOTO expunged from the programming canon,   execution proceeds from top to down,   except for a limited number of  control structures that let us return to the beginning or jump to the end of a block.  It’s a simple and intuitive model that we don’t often think about,  but it’s easy to write code that breaks this model when programming with closures.

Emulating Foreach() in Javascript with Closures

Popular languages such as PHP and C# have a foreach() statement that loops over the elements of a collection without the need to keep track of a counter variable.  Popular Javascript frameworks such as JQuery and Dojo implement foreach functions;  with the magic of closures,  we can get an effect similar to a built-in foreach construct.  For instance,  in JQuery,  we can write

16 var sum=0;
17 $.each(x,function(i,val) {
18    sum += val;
19 });

In this case, the second argument of the $.each (JQuery.each) function is an anonymous function. The special property it has, as a closure, is that the anonymous function has access to variables in the enclosing scope, namely sum. This allows the code in 16-19 to look and work a lot like the code in 12-15, despite the fact that line 18 is in a different function than line 16.

As an aside,  this kind of higher-order function is often simple to implement;  although the real $.each() function is capable of iterating over a wide variety of types (thus more complex),  a function that can iterate over arrays is as simple as

20 function foreachArrayElement(arr,loopBody) {
21    for(var i=0;i&lt;arr.length;i++) {
22       loopBody(i,arr[i]);
23    }
24 }

Breaking The Conventions of Time

Although you’ve got other choices, closures are a popular and concise way to write callbacks for asynchronous communication. Suppose that we want to GET a packet of JSON data from a server and update a user interface. In JQuery we could write:

25 $.get("JsonService", {}, function(data, textStatus) {
26     var obj=JSON.parse(data);
27     someUIElement.update(obj);
28  });
30 ... do more stuff ...

Note that time isn’t progressing downward in this script anymore. We go from line 25 directly to line 30. The code between 26-28 only executes when data gets back from the server. In this case, this behavior is caused by the asynchronous nature of XMLHttpRequest, for which $.get() is a wrapper. Similar behavior can be had by attaching an event handler to a DOM object, or even by adding a function reference to an object or the global scope, causing the anonymous function to execute when some other function, defined elsewhere, executes.

“Race Conditions” In Asynchronous Communication

Although it’s not wrong to break the conventions of time,  doing so risks the introduction of tricky and subtle errors.  Suppose we’re building an AJAX application where we’d like to cache an object,  retrieving the object only if we don’t have a copy in cache.  One obvious approach is to write

31 if (cachedCopy==undefined) {
32  $.get("DataService",{},function(data,textStatus) {
33       cachedCopy=data;
34       updateUIOne(data);
35   });
36 } else {
37   updateUIOne(cachedCopy);
38 }
40 updateUITwo();

updateUIOne(data) populates the user interface with the data; updateUITwo() makes some other change to the user interface.

Unfortunately, this code has a potential bug that’s hidden by the conventions of time. When the data is in the cache, line 37 executes before line 40, so updateUIOne(data) is called before updateUITwo(). When the data is not in the cache, line 40 executes before line 33 (communication callbacks don’t run in Javascript until the executing code returns control to the browser.) It’s all fine if the order in which updateUIOne and updateUITwo run doesn’t matter, and a disaster if it does. Code like this does one thing sometimes and another thing other times: it’s difficult to test, and leads to the kind of bug that drives people to drink.

The real answer to these problems is to take a systematic approach to asynchronous communication: any strategy based on band-aids that work here or there is going to break down eventually under the weight of growing requirements. That said, I’ll offer a few strategies for patching this kind of problem:

  1. If we could move updateUITwo() from line 40 to before line 31,  updateUITwo() and updateUIOne() could be run in a consistent order.
  2. Modifying updateUIOne(data) to call updateUITwo() after it completes also results in a consistent order
  3. We can schedule UpdateUIOne to run AFTER the executing code returns to the browser,  by replacing line 34 with
41    setTimeout(function() { updateUIOne(data) },1);
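Strategy 2 can be packaged up so that the caller always supplies one callback that performs both updates, giving a consistent order on both the cache-hit and cache-miss paths. A sketch, with the function names invented for illustration:

```javascript
// Sketch: a cache-aware fetch where the caller's "done" callback
// runs in the same relative order on both paths.
function fetchWithCache(cache, key, fetcher, done) {
  if (cache[key] !== undefined) {
    done(cache[key]);          // cache hit: done runs immediately
  } else {
    fetcher(key, function (data) {
      cache[key] = data;       // cache miss: done runs when data arrives
      done(data);
    });
  }
}
```

With this shape, updateUIOne and updateUITwo both live inside the single done callback, so they can never be reordered relative to each other; the remaining asynchrony (done running now versus later) is still there, but it is confined to one place.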

Structural Instability

Let’s consider another example where the conventions of time are treacherous:  suppose we need to retrieve three chunks of data from the server

42 $.get("Chunk1Server",{},function(data1,textStatus1) {
43   UpdateUIOne(data1);
44   $.get("Chunk2Server",{},function(data2,textStatus2) {
45       UpdateUITwo(data2);
46        $.get("Chunk3Server",{},function(data3,textStatus3) {
47            UpdateUIThree(data3);
48       });
49    });
50 });

Note that this specific chunk of code executes exactly the way that it looks: although there are some gaps, execution proceeds progressively from line 42 to line 47. The “nested closure” idiom is common in Javascript code, probably because it looks a lot like the synchronous code that does the same job, but it’s treacherous: a tiny change in the code can break the conventions of time, causing strange things to happen. This property is called structural instability.

For example, the code above returns directly to the browser before any of the callbacks run, eliminating the possibility of the “race conditions” seen in the last section. But if we add the following line:

51 UpdateUIFour();

we’ll find that UpdateUIFour() runs before the other functions. This isn’t bad behavior if we expect it, but it could have spooky effects if we don’t. This example is trivial, but similar mistakes can be made in any of the closures, resulting in errors that can be quite baffling.

The structural instability of nested closures pushes complex AJAX applications towards other styles of coding,  such as a state-machine organization.

Order Of Arguments

This is a small issue compared to the others, but the order of arguments of callback functions can add to the confusion around time. The $.get() function provides a good example, since it supports four parameters:

51 $.get(url,parameters,callback,type);

All of the parameters except the url are optional. It’s generally good to put less-used optional parameters towards the right, but placing the type declaration after the callback disturbs the flow of time and hurts readability. If you choose to have the returned data parsed as JSON,

52 $.get(targetUrl,{},function(data,textStatus) {
53    ... handler code ...
54 },"json");

you’ll see that the type specification occurs after the callback. This, by itself, adds a small amount of confusion, but when you’ve got multiple nested closures, or if you’re computing the values defined after the callback, the code quickly becomes difficult to understand.

Event Handlers And “COMEFROM”

In 1973 R. Lawrence Clark jokingly introduced a COMEFROM statement in response to Edsger Dijkstra’s famous “Go To Statement Considered Harmful”. From the associated Wikipedia article, I take an example of a program in a hypothetical BASIC dialect that uses COMEFROM:

55 COMEFROM 58
56 INPUT "WHAT IS YOUR NAME? "; A$
57 PRINT "HELLO, "; A$
58 REM

COMEFROM is a bit disturbing because it involves an “action at a distance:”  line 55 modifies the behavior of line 58.  A person looking a line 58 in isolation would have a hard time understanding what happens in the program when execution reaches line 58.

The use of event handlers,  both built in and custom,  has an effect similar to COMEFROM.  Consder the common example

59 $("#activeButton").click(function() {
60    ... handle button click...
61 });

Once more,  the code at line 60 will execute after the code that follows line 61.  Line 59 modifies the behavior of a built-in UI element,  which might be defined like

62 <input type="button" id="activeButton" value="Button X">

Looking at line 62 alone, it’s impossible to tell what happens when you click on the button; an “onClick” handler in line 62 would be more immediately obvious in function. That said, events are a proven model for GUI programming that introduces a useful decoupling between the parts of a system, a looser coupling that lets us build bigger systems. In the context of JQuery, it’s great to be able to write something like

63 <div class="SpecialArea">... content ...</div>

Javascript code can then use the $() function to attach styles,  event handlers and to manipulate DOM elements inside the .SpecialArea to give the area specific behaviors and appearance.  This lets a developer provide a powerful but easy-to-use abstraction to a designer:   adding a single class specification lets us reuse tens of CSS specifications and several event handlers.   This is a big win,  but we can’t forget the hidden COMEFROM that’s implied.

Events are dynamically added and removed, even in static languages such as Java and C#. Although this dynamism adds flexibility, it comes at a cost. IDEs such as Eclipse and Visual Studio can do a great job of helping programmers understand static method calls: for instance, you can click on the name of a method and see a list of places where it is called. Because events aren’t amenable to static analysis, inappropriate use (or even appropriate use) of events impairs some of the tools we have for understanding programs.

Events are a great tool,  but programmers need to understand the weirdness that underlies them.  Used deliberately,  events can provide a concise and scalable way to move information and direct control in an application.  Poorly used events can cause software to degenerate to “spaghetti code”.


Closures are a powerful and concise way to express your intentions to a computer; however, closures break some of the intuitive assumptions that people use to understand software, specifically the idea that time moves downward through the execution of a procedure. This leads to a kind of structural instability, where software that’s perfectly simple and clear at a small scale can suddenly get much harder to understand when several complications converge. This article uses JQuery and Javascript as an example not because JQuery is bad, but because it’s good: it’s a tool that helps beginning (and advanced) programmers accomplish a lot with little code. Yet, as we build increasingly complex applications, we need a clear understanding of the big and little weirdnesses of RIA programming.

Getting a Good Track From a Garmin eTrex Wed, 03 Jun 2009 16:15:45 +0000 Paul Houle

Introduction

The Garmin eTrex series are popular handheld GPS units. The eTrex H is an inexpensive unit with excellent sensitivity and accuracy, and the eTrex Vista HCx is a favorite of OpenStreetMap contributors because it accepts a microSD card which can hold OSM maps. I intended to use my Vista HCx to contribute to OSM and to georeference photographs, but I was shocked to discover that eTrex units remove time information from saved tracks. This means that saved tracks aren’t useful if you want to georeference photographs with an application like GPicSync. There’s a simple solution to this problem: avoid saved tracks and download the “Active Track” instead.

Saved Tracks

The eTrex units “dumb down” saved tracks by (i) reducing the number of points, (ii) removing time information,  and (iii) applying spatial filtering.  A saved track looks like this in Garmin MapSource:


This track could be useful for mapping, but lacking time information, it can’t be used to reference events that occur at a particular moment in time. Although the eTrex has a number of menu items for configuring the active track (to, for instance, increase the sample rate at which points are taken), there are no options that influence the information stored in saved tracks.

What’s in your eTrex?

Once you’ve loaded tracks from your eTrex, you’ll see that there are both saved tracks and a collection of “active logs”:


You’ll usually find more than one active log:  new ones are created in order when the unit is power cycled or the track is otherwise reset.  Active logs respect the sampling rate you’ve specified,  keep timestamps,  and are less filtered than saved tracks:



Avoid saved tracks if timestamps matter to you.  You’ll lose the advantage of using saved tracks to organize your tracks,  but this should matter little if you’re georeferencing photographs:  GPicSync looks at all of the tracks in the .gpx file and uses timestamps to match everything up.
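The timestamp matching that a tool like GPicSync performs can be sketched in a few lines of Python. The track points and photo time below are invented for illustration; they only show the idea of pairing a photo's EXIF time with the closest point in the active log:

```python
from bisect import bisect_left
from datetime import datetime

# A hypothetical track: (timestamp, latitude, longitude) tuples parsed
# from the .gpx file, already sorted by time.
track = [
    (datetime(2009, 6, 3, 12, 0, 0),  42.4433, -76.5000),
    (datetime(2009, 6, 3, 12, 0, 30), 42.4437, -76.4990),
    (datetime(2009, 6, 3, 12, 1, 0),  42.4441, -76.4980),
]

def nearest_point(track, photo_time):
    """Return the track point whose timestamp is closest to photo_time."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    # the nearest point is either just before or just after the photo time
    candidates = track[max(i - 1, 0):i + 1] or track[-1:]
    return min(candidates, key=lambda p: abs(p[0] - photo_time))

when, lat, lon = nearest_point(track, datetime(2009, 6, 3, 12, 0, 40))
# the photo taken at 12:00:40 matches the 12:00:30 track point
```

Without timestamps in the track (as in a saved track), this matching is simply impossible, which is why the active log is the one to keep.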

]]> 0
My First Web 3.0 Site Thu, 02 Apr 2009 02:23:08 +0000 Paul Houle I haven’t blogged much in the last month because I’ve been busy. I’ve got a day job, and I’m a parent, and I’ve also been working on a new site: Creative Commons Car Pictures.  What you see on the surface of “Car Pictures” might be familiar, but what’s under the surface is something you’ve probably never seen before:


A collection of car pictures under a Creative Commons license,  it was built from a taxonomy that was constructed from Dbpedia and Freebase.  My editor and I then used a scalable process to clean up the taxonomy,  find matching images and enrich image metadata.  As you can imagine,  this is a process that can be applied to other problem domains:  we’ve tested some of the methodology on our site Animal Photos! but this is the first site designed from start to finish based on the new technology.

Car Pictures was made possible by data sets available on the semantic web, and it will soon export data via the semantic web as well.  We’re currently talking with people about exporting our content in a machine-readable linked data format — in our future plans, this will enable a form of semantic search that will revolutionize multimedia search: rather than searching for imprecise keywords, it will become possible to look up pictures, video and documents about named entities found in systems such as Freebase, Wikipedia, WordNet, Cyc and OpenCalais.

In the next few days we’re going to watch carefully how Car Pictures interacts with users and with web crawlers.  Then we’ll be adding content,  community features,  and a linked data interface.   In the meantime,  we’re planning to build something at least a hundred times bigger.

Quite literally, thousands of people have contributed to Car Pictures,  but I’d like to particularly thank Dan Brickley,  who’s been helpful in the process of interpreting Dbpedia with his ARC2 RDF tools,  and Kingsley Idehen,  who’s really helped me sharpen my vision of what the semantic web can be.

Anyway, I’d love any feedback that you can give me about Car Pictures and any help I can get in spreading the word about it.  If you’re interested in making a similar site about some other subject, please contact me: it’s quite possible that we can help.

]]> 1
First-Class Functions and Logical Negation in C# Mon, 09 Mar 2009 13:50:39 +0000 Paul Houle


Languages such as LISP, ML, OCaml, F# and Scala have supported first-class functions for a long time.  Functional programming features are gradually diffusing into mainstream languages such as C#, Javascript and PHP.  In particular, lambda expressions, implicit typing, and delegate autoboxing have made C# 3.0 a much more expressive language than its predecessors.

In this article, I develop a simple function that acts on functions: given a boolean function f, F.Not(f) returns a new boolean function which is the logical negation of f.  (That is, F.Not(f)(x) == !f(x).)  Although the end function is simple to use, I had to learn a bit about the details of C# to get the behavior I wanted — this article records that experience.


I was debugging an application that uses Linq-to-Objects the other day,  and ran into a bit of a pickle.  I had written an expression that would return true if all of the values in a List<String> were valid phone numbers:

[01] dataColumn.All(Validator.IsValidPhoneNumber);

I was expecting the list to validate successfully, but the function was returning false.  Obviously at least one of the values in the list was not validating — but which one?  I tried using a lambda expression in the immediate window (which lets you type in expressions while debugging) to reverse the sense of the match:

[02] dataColumn.Where(s => !Validator.IsValidPhoneNumber(s));

but that gave me the following error message:

[03] Expression cannot contain lambda expressions

(The expression compiler used in the immediate window doesn’t support all of the language features supported by the full C# compiler.)  I was able to find the non-matching elements using the set difference operator

[04] dataColumn.Except(dataColumn.Where(Validator.IsValidPhoneNumber)).ToList()

but I wanted an easier and more efficient way — with a logical negation function,  I could just write

[05] dataColumn.Where(F.Not(Validator.IsValidPhoneNumber)).ToList();

Try 1: Extension Method

I wanted to make the syntax for negation as simple as possible.  I didn’t even want to have to name a class, so I tried the following extension method:

[06] static class FuncExtensions {
[07]    public static Func<Boolean> Not(this Func<Boolean> innerFunction) {
[08]       return () => !innerFunction();
[09]    }
[10]    ...
[11]   }

I was hoping that,  given a function like

[12] public static bool AlwaysTrue() { return true; }

that I could negate the function by writing

[13] AlwaysTrue.Not()

Unfortunately,  it doesn’t work that way:  the extension method can only be called on a delegate of type Func<Boolean>.  Although the compiler will “autobox” function references to delegates in many situations,  it doesn’t do it when you reference an extension method.  I could write

[14] ((Func<Boolean>)AlwaysTrue).Not()

but that’s not a pretty syntax.   At that point,  I tried another tack.

Try 2: Static Method

Next,  I defined a set of negation functions as static methods on a static class:

[15] public static class F {
[16]    public static Func<Boolean> Not(Func<Boolean> innerFunction) {
[17]       return () => !innerFunction();
[18]    }
[20]    public static Func<T1,Boolean> Not<T1>(
[21]       Func<T1,Boolean> innerFunction) {
[22]          return x =>!innerFunction(x);
[23]    }
[25]    public static Func<T1, T2,Boolean> Not<T1,T2>(
[26]       Func<T1, T2,Boolean> innerFunction) {
[27]          return (x,y) => !innerFunction(x,y);
[28]    }
[30]    public static Func<T1, T2, T3, Boolean> Not<T1,T2,T3>(
[31]       Func<T1, T2, T3, Boolean> innerFunction) {
[32]           return (x, y, z) => !innerFunction(x, y, z);
[33]    }
[35]    public static Func<T1, T2, T3, T4, Boolean> Not<T1, T2, T3, T4>(
[36]       Func<T1, T2, T3, T4,Boolean> innerFunction) {
[37]          return (x, y, z, a) => !innerFunction(x, y, z, a);
[38]    }
[39] }

Now I can write

[40] F.Not(AlwaysTrue)() // always == false


[41] Func<int,int,int,int,Boolean> testFunction = (a,b,c,d) => (a+b)>(c+d);
[42] F.Not(testFunction)(1,2,3,4)

which is sweet — the C# compiler now automatically autoboxes the AlwaysTrue argument to F.Not on line [40].  Note two details of how type inference works here:

  1. The compiler automatically infers the type parameters of F.Not() by looking at the arguments of the innerFunction.  If it didn’t do that,  you’d need to write

     F.Not<int,int,int,int>(testFunction)(1,2,3,4)

     which would be a lot less fun.

  2. On line [41],  note that the compiler derives the types of the parameters a, b, c, and d from the type declaration on the left hand side (LHS)

     you can’t write var on the LHS in this situation because that doesn’t give the compiler information about the parameters and return value of the lambda.

Although good things are happening behind the scenes,  there are also two bits of ugliness:

  1. I need to implement F.Not() 5 times to support functions with 0 to 4 parameters:  once I’ve done that,  however,  the compiler automatically resolves the overloading and picks the right version of the function.
  2. The generic Func<> and Action<> delegates support at most 4 parameters.  Although it’s certainly true that functions with a large number of parameters can be difficult to use and maintain (and should be discouraged),  this is a real limitation.

Extending IEnumerable<T>

One of the nice things about Linq is that you can extend it by adding new extension methods to IEnumerable.  Not everybody agrees, but I’ve always liked the unless() statement in Perl, which is equivalent to if(!test):

[42] unless(everything_is_ok()) {
[43]   abort_operation;
[44] }

A replacement for the Where(predicate) method that negates the predicate would have been convenient for my debugging problem, so I built an Unless() extension method that combines Where() with F.Not():

[45] public static IEnumerable<T> Unless<T>(
[46]    this IEnumerable<T> input, Func<T, Boolean> fn) {
[47]       return input.Where(F.Not(fn));
[48]     }

Now I can write

[49] var list = new List<int>() { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
[50] var filtered = list.Unless(i => i > 3).ToList();

and get the result

[51] filtered = { 1,2,3 }


New features in C# 3.0 make it an expressive language for functional programming.  Although the syntax of C# isn’t quite as sweet as F# or Scala, a programmer who works with the implicit typing and autoboxing rules of the compiler can create easy-to-use functions that act on functions — in this article we developed a set of functions that negate boolean functions and applied them to add a new restriction method to IEnumerable<T>.


]]> 0
Putting Freebase in a Star Schema Wed, 25 Feb 2009 14:33:36 +0000 Paul Houle What’s Freebase?

Freebase is an open database of things that exist in the world: things like people, places, songs and television shows.  As of the January 2009 dump, Freebase contained about 241 million facts, and it’s growing all the time.  You can browse it via the web and even edit it, much like Wikipedia.  Freebase also has an API that lets programs add data and make queries using a language called MQL.  Freebase is complementary to DBpedia and other sources of information.  Although it takes a different approach to the semantic web than systems based on RDF standards, it interoperates with them via linked data.

The January 2009 Freebase dump is about 500 MB in size.  Inside the bzip-compressed file, you’ll find something that’s similar in spirit to a Turtle RDF file, but in a simpler format that represents facts as collections of four values rather than just three.
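The four-value format can be illustrated with a short Python sketch.  The sample line below is invented, and the exact field layout of a real dump line is my assumption; only the "four tab-separated values per fact" shape matters here:

```python
# Each line of the dump holds one fact as four tab-separated values
# (this sample line is made up for illustration).
line = "/guid/9202a8c04000641f800000000000003b\t/type/object/name\t/lang/en\tArea 51\n"

def parse_quad(line):
    """Split one dump line into a 4-tuple (a quad)."""
    fields = line.rstrip("\n").split("\t")
    if len(fields) != 4:
        raise ValueError("expected four values, got %d" % len(fields))
    return tuple(fields)

quad = parse_quad(line)
```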

Your Own Personal Freebase

To start exploring and extracting from Freebase, I wanted to load the database into a star schema in a mysql database — an architecture similar to some RDF stores, such as ARC.  The project took about a week of time on a modern x86 server with 4 cores and 4 GB of RAM and resulted in an 18 GB collection of database files and indexes.

This is sufficient for my immediate purposes,  but future versions of Freebase promise to be much larger:  this article examines the means that could be used to improve performance and scalability using parallelism as well as improved data structures and algorithms.

I’m interested in using generic databases such as Freebase and Dbpedia as a data source for building web sites.  It’s possible to access generic databases through  APIs,  but there are advantages to having your own copy:  you don’t need to worry about API limits and network latency,  and you can ask questions that cover the entire universe of discourse.

Many RDF stores use variations of a format known as a Star Schema for representing RDF triples; the Star Schema is commonly used in data warehousing applications because it can efficiently represent repetitive data.  Freebase is similar to, but not quite, an RDF system.  Although there are multiple ways of thinking about Freebase, the quarterly dump files provided by Metaweb are presented as quads: groups of four related terms in tab-delimited form.  To have a platform for exploring Freebase, I began a project of loading Freebase into a Star Schema in a relational database.

A Disclaimer

Timings reported in this article are approximate.  This work was done on a server that was doing other things; little effort was made to control sources of variation such as foreign workload, software configuration and hardware characteristics.  I think it’s orders of magnitude that matter here: with much larger data sets becoming available, we need tools that can handle databases 10-100 times as big, and quibbling about 20% here or there isn’t so important.  I’ve gotten similar results with the ARC triple store.  Some products do about an order of magnitude better: the Virtuoso server can load DBpedia, a larger database, in about 16 to 22 hours on a 16 GB computer; several papers on RDF store performance are available [1] [2] [3].  Although the system described in this paper isn’t quite an RDF store, its performance is comparable to that of a relatively untuned RDF store.

It took about a week of calendar time to load the 241 million quads in the January 2009 Freebase into a Star Schema using a modern 4-core web server with 4GB of RAM; this time could certainly be improved with micro-optimizations, but it’s in the same range that people observe when loading 10^8 triples into other RDF stores.  (One product is claimed to load DBpedia, which contains about 100 million triples, in about 16 hours with “heavy-duty hardware.”)  Data sets exceeding 10^9 triples are rapidly becoming available — these will soon exceed what can be handled with simple hardware and software and will require new techniques: both the use of parallel computers and optimized data structures.

The Star Schema

In a star schema,  data is represented in separate fact and dimension tables,


all of the rows in the fact table (quad) contain integer keys — the values associated with the keys are defined in dimension tables (cN_value).  This efficiently compresses the data and indexes for the fact table,  particularly when the values are highly repetitive.

I loaded data into the following schema:

create table c1_value (
   id             integer primary key auto_increment,
   value          text,
   key            value_key (value(255))
) type=myisam;

... identical c2_value, c3_value and c4_value tables ...

create table quad (
   id             integer primary key auto_increment,
   c1             integer not null,
   c2             integer not null,
   c3             integer not null,
   c4             integer not null
) type=myisam;

Although I later created indexes on c1, c2, c3, and c4 in the quad table, I left unnecessary indexes off of the tables during the loading process because it’s more efficient to create indexes after loading data into a table.  The keys on the value fields of the dimension tables are important, because the loading process does frequent queries to see if values already exist in the dimension tables.  The sequentially assigned id in the quad table isn’t necessary for many applications, but it gives each fact a unique identity and makes the system aware of the order of facts in the dump file.

The Loading Process

The loading script was written in PHP and used a naive method to build the index incrementally.  In pseudo code it looked something like this:

function insert_quad($q1,$q2,$q3,$q4) {
    $c1 = get_dimension_key(1,$q1); ... likewise for $c2, $c3, $c4 ...
    ... insert ($c1,$c2,$c3,$c4) as a row in the quad table ...
}

function get_dimension_key($index,$value) {
    $cached_value = ... look $value up in the in-memory cache ...;
    if ($cached_value)
        return $cached_value;
    $row = ... select id from c{$index}_value where value matches ...;
    if ($row)
        return $row->id;
    ... insert $value into c{$index}_value ...;
    return $conn->last_insert_id;
}

Caching frequently used dimension values improves performance by a factor of five or so, at least in the early stages of loading.  A simple cache management algorithm, clearing the cache every 500,000 facts, controls memory use.  Timing data shows that a larger cache or a better replacement algorithm would make at most an incremental improvement in performance.  (Unless a complete dimension table index can be held in RAM, in which case all read queries can be eliminated.)
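That cache-management scheme can be sketched in Python; the lookup_or_insert callback here is a stand-in for the SELECT/INSERT round trip against a cN_value table:

```python
CLEAR_INTERVAL = 500_000   # clear the cache every 500,000 facts

class DimensionCache:
    """Memoize value -> integer-key lookups, with periodic clearing."""
    def __init__(self, lookup_or_insert):
        self.lookup_or_insert = lookup_or_insert  # database round trip
        self.cache = {}
        self.facts_seen = 0

    def get_key(self, value):
        self.facts_seen += 1
        if self.facts_seen % CLEAR_INTERVAL == 0:
            self.cache.clear()        # crude, but it bounds memory use
        if value not in self.cache:
            self.cache[value] = self.lookup_or_insert(value)
        return self.cache[value]

# exercise the cache with a fake database round trip
calls = []
def fake_lookup(value):
    calls.append(value)
    return len(calls)                 # pretend auto_increment id

cache = DimensionCache(fake_lookup)
k1 = cache.get_key("/type/object/name")
k2 = cache.get_key("/type/object/name")  # served from the cache
```

A fancier replacement policy (say, LRU) would help only marginally, as the timing data suggests.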

I performed two steps after the initial load:

  1. Created indexes on quad(c1), quad(c2), quad(c3) and quad(c4)
  2. Used myisam table compression to reduce database size and improve performance

Loading Performance

It took about 140 hours (nearly 6 days) to do the initial load.  Here’s a graph of facts loaded vs elapsed time:


The important thing about this graph is that it’s convex upward: the loading process slows down as the number of facts increases.  The first 50 million quads are loaded at a rate of about 6 million per hour; the last 50 million are loaded at a rate of about 1 million per hour.  An explanation of the details of the curve would be complex, but the log N search performance of B-tree indexes and the ability of the database to answer queries out of the computer’s RAM cache are significant factors.  Generically, all databases will behave the same way, becoming progressively slower as the size of the database increases: you’ll eventually reach a database size where the time to load the database becomes unacceptable.

The process of constructing b-tree indexes on the mysql tables took most of a day.  On average it took about four hours to construct a b-tree index on one column of quad:

mysql> create index quad_c4 on quad(c4);
Query OK, 243098077 rows affected (3 hours 40 min 50.03 sec)
Records: 243098077  Duplicates: 0  Warnings: 0

It took about an hour to compress the tables and rebuild indexes,  at which point the data directory looks like:

-rw-r----- 1 mysql root        8588 Feb 22 18:42 c1_value.frm
-rw-r----- 1 mysql root   713598307 Feb 22 18:48 c1_value.MYD
-rw-r----- 1 mysql root   557990912 Feb 24 10:48 c1_value.MYI
-rw-r----- 1 mysql root        8588 Feb 22 18:56 c2_value.frm
-rw-r----- 1 mysql root      485254 Feb 22 18:46 c2_value.MYD
-rw-r----- 1 mysql root      961536 Feb 24 10:48 c2_value.MYI
-rw-r----- 1 mysql root        8588 Feb 22 18:56 c3_value.frm
-rw-r----- 1 mysql root   472636380 Feb 22 18:51 c3_value.MYD
-rw-r----- 1 mysql root   370497536 Feb 24 10:51 c3_value.MYI
-rw-r----- 1 mysql root        8588 Feb 22 18:56 c4_value.frm
-rw-r----- 1 mysql root  1365899624 Feb 22 18:44 c4_value.MYD
-rw-r----- 1 mysql root  1849223168 Feb 24 11:01 c4_value.MYI
-rw-r----- 1 mysql root          65 Feb 22 18:42 db.opt
-rw-rw---- 1 mysql mysql       8660 Feb 23 17:16 quad.frm
-rw-rw---- 1 mysql mysql 3378855902 Feb 23 20:08 quad.MYD
-rw-rw---- 1 mysql mysql 9927788544 Feb 24 11:42 quad.MYI

At this point it’s clear that the indexes are larger than the actual data.  Note that c2_value is much smaller than the other tables because it holds a relatively small number of predicate types:

mysql> select count(*) from c2_value;
| count(*) |
|    14771 |
1 row in set (0.04 sec)

mysql> select * from c2_value limit 10;
| id | value                                                 |
|  1 | /type/type/expected_by                                |
|  2 | reverse_of:/community/discussion_thread/topic         |
|  3 | reverse_of:/freebase/user_profile/watched_discussions |
|  4 | reverse_of:/freebase/type_hints/included_types        |
|  5 | /type/object/name                                     |
|  6 | /freebase/documented_object/tip                       |
|  7 | /type/type/default_property                           |
|  8 | /type/type/extends                                    |
|  9 | /type/type/domain                                     |
| 10 | /type/object/type                                     |
10 rows in set (0.00 sec)

The total size of the mysql tablespace comes to about 18GB, an expansion of about 40 times relative to the bzip2-compressed dump file.

Query Performance

After all of this trouble, how does it perform?  Not too bad if we’re asking a simple question, such as pulling up the facts associated with a particular object:

mysql> select * from quad where c1=34493;
| id      | c1    | c2   | c3      | c4     |
| 2125876 | 34493 |   11 |      69 | 148106 |
| 2125877 | 34493 |   12 | 1821399 |      1 |
| 2125878 | 34493 |   13 | 1176303 | 148107 |
| 2125879 | 34493 | 1577 |      69 | 148108 |
| 2125880 | 34493 |   13 | 1176301 | 148109 |
| 2125881 | 34493 |   10 | 1713782 |      1 |
| 2125882 | 34493 |    5 | 1174826 | 148110 |
| 2125883 | 34493 | 1369 | 1826183 |      1 |
| 2125884 | 34493 | 1578 | 1826184 |      1 |
| 2125885 | 34493 |    5 |      66 | 148110 |
| 2125886 | 34493 | 1579 | 1826185 |      1 |
11 rows in set (0.05 sec)

Certain sorts of aggregate queries are reasonably efficient,  if you don’t need to do them too often:  we can look up the most common 20 predicates in about a minute:

mysql> select (select value from c2_value as v where v.id=q.c2) as predicate,
       count(*)
       from quad as q
       group by c2
       order by count(*) desc
       limit 20;
| predicate                               | count(*) |
| /type/object/type                       | 27911090 |
| /type/type/instance                     | 27911090 |
| /type/object/key                        | 23540311 |
| /type/object/timestamp                  | 19462011 |
| /type/object/creator                    | 19462011 |
| /type/permission/controls               | 19462010 |
| /type/object/name                       | 14200072 |
| master:9202a8c04000641f800000000000012e |  5541319 |
| master:9202a8c04000641f800000000000012b |  4732113 |
| /music/release/track                    |  4260825 |
| reverse_of:/music/release/track         |  4260825 |
| /music/track/length                     |  4104120 |
| /music/album/track                      |  4056938 |
| /music/track/album                      |  4056938 |
| /common/document/source_uri             |  3411482 |
| /common/topic/article                   |  3369110 |
| reverse_of:/common/topic/article        |  3369110 |
| /type/content/blob_id                   |  1174046 |
| /type/content/media_type                |  1174044 |
| reverse_of:/type/content/media_type     |  1174044 |
20 rows in set (43.47 sec)

You’ve got to be careful how you write your queries: the above query with the subselect is efficient, but I found it took 5 hours to run when I joined c2_value with quad and grouped on value.  A person who wishes to do frequent aggregate queries would find it most efficient to create materialized views of the aggregates.

Faster And Larger

It’s obvious that the Jan 2009 Freebase is near the limit of what the techniques I’m using can handle.  One thing I’m sure of is that Freebase will be much bigger next quarter — I’m not going to do it the same way again.  What can I do to speed the process up?

Don’t Screw Up

This kind of process involves a number of lengthy steps.  Mistakes,  particularly if repeated,  can waste days or weeks.  Although services such as EC2 are a good way to provision servers to do this kind of work,  the use of automation and careful procedures is key to saving time and money.

Partition it

Remember how the loading rate of a data set decreases as the size of the set increases?  If I could split the data set into 5 partitions of 50 M quads each, I could increase the loading rate by a factor of 3 or so.  If I can build those 5 partitions in parallel (which is trivial), I can reduce wallclock time by a factor of 15.
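One way to assign quads to partitions is to hash on the first column, so that all facts about a given subject land in the same partition.  The column choice is my assumption, not something from the original loading script; this is just a sketch of the idea:

```python
from hashlib import md5

N_PARTITIONS = 5

def partition_of(quad):
    """Pick a partition (0..N_PARTITIONS-1) by hashing the quad's first column."""
    digest = md5(quad[0].encode("utf-8")).hexdigest()
    return int(digest, 16) % N_PARTITIONS

# every fact about the same subject goes to the same partition,
# so per-subject queries only need to touch one partition
p = partition_of(("/guid/9202a8c04000641f800000000000003b",
                  "/type/object/name", "/lang/en", "Area 51"))
```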

Eliminate Random Access I/O

This loading process is slow because of the involvement of random access disk I/O.  All of Freebase can be loaded into mysql with the following statement,


which took me about 40 minutes to run.  Processes that do a “full table scan” on the raw Freebase table with a grep or awk-type pipeline take about 20-30 minutes to complete.  Dimension tables can be built quickly if they can be indexed by a RAM hashtable.  The process that builds the dimension table can emit a list of key values for the associated quads: this output can be sequentially merged to produce the fact table.
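That sequential plan can be sketched in Python: the dimension table lives in a RAM dict while a single pass over the dump emits the integer key for each quad, so no random-access reads against the database are needed.  (The toy in-memory scale is illustrative; a real run would stream from disk and spill the key lists to files for merging.)

```python
def build_dimension_and_keys(quads, column):
    """One sequential pass: build the dimension table for one column
    as an in-RAM hashtable and emit the integer key for each quad."""
    dimension = {}      # value -> sequentially assigned integer key
    keys = []
    for q in quads:
        v = q[column]
        if v not in dimension:
            dimension[v] = len(dimension) + 1
        keys.append(dimension[v])
    return dimension, keys

quads = [("a", "p1", "x", "1"),
         ("b", "p1", "y", "1"),
         ("a", "p2", "x", "2")]
dim, keys = build_dimension_and_keys(quads, 0)
# dim == {"a": 1, "b": 2}, keys == [1, 2, 1]
```

Running this once per column yields four key streams that can be zipped together sequentially to produce the fact table.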

Bottle It

Once a data source has been loaded into a database,  a physical copy of the database can be made and copied to another machine.  Copies can be made in the fraction of the time that it takes to construct the database.  A good example is the Amazon EC2 AMI that contains a preinstalled and preloaded Virtuoso database loaded with billions of triples from DBPedia,  MusicBrainz,  NeuroCommons and a number of other databases.  Although the process of creating the image is complex,  a new instance can be provisioned in 1.5 hours at the click of a button.

Compress Data Values

Unique object identifiers in freebase are coded in an inefficient ASCII representation:

mysql> select * from c1_value limit 10;
| id | value                                  |
|  1 | /guid/9202a8c04000641f800000000000003b |
|  2 | /guid/9202a8c04000641f80000000000000ba |
|  3 | /guid/9202a8c04000641f8000000000000528 |
|  4 | /guid/9202a8c04000641f8000000000000836 |
|  5 | /guid/9202a8c04000641f8000000000000df3 |
|  6 | /guid/9202a8c04000641f800000000000116f |
|  7 | /guid/9202a8c04000641f8000000000001207 |
|  8 | /guid/9202a8c04000641f80000000000015f0 |
|  9 | /guid/9202a8c04000641f80000000000017dc |
| 10 | /guid/9202a8c04000641f80000000000018a2 |
10 rows in set (0.00 sec)

These are 38 bytes apiece.  The hexadecimal part of the guid could be represented in 16 bytes in a binary format,  and it appears that about half of the guid is a constant prefix that could be further excised.
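A sketch of the suggested packing in Python (removeprefix requires Python 3.9+); the further trick of dropping the shared constant prefix is left out:

```python
def pack_guid(guid):
    """Pack a 38-byte ASCII guid like /guid/9202a8c0... into 16 binary bytes."""
    hex_part = guid.removeprefix("/guid/")
    return bytes.fromhex(hex_part)      # 32 hex digits -> 16 bytes

packed = pack_guid("/guid/9202a8c04000641f800000000000003b")
# len(packed) == 16, versus 38 bytes for the ASCII form
```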

A similar efficiency can be gained in the construction of in-memory dimension tables: md5 or sha1 hashes could be used as proxies for values.

The freebase dump is littered with “reverse_of:” properties which are superfluous if the correct index structures exist to do forward and backward searches.

Parallelize it

Loading can be parallelized in many ways: for instance, the four dimension tables can be built in parallel.  Dimension tables can also be built by a sorting process that can be performed on a computer cluster using map/reduce techniques.  A cluster of computers can also store a knowledge base in RAM, trading sequential disk I/O for communication costs.  Since the availability of data is going to grow faster than the speed of storage systems, parallelism is going to become essential for handling large knowledge bases — an issue identified by Japanese AI workers in the early 1980s.

Cube it?

Some queries benefit from indexes built on combinations of columns, such as

CREATE INDEX quad_c1_c2 ON quad(c1,c2);

There are 40 combinations of columns on which an index could be useful — however, the cost in time and storage involved in creating those indexes would be excessive.  If such indexes were indeed necessary, a multidimensional database could create a cube index that is less expensive than a complete set of B-tree indexes.

Break it up into separate tables?

It might be anathema to many semweb enthusiasts, but I think that Freebase (and parts of Freebase) could be efficiently mapped to conventional relational tables.  That’s because facts in Freebase are associated with types: see, for instance, Composer from the Music Commons.  It seems reasonable to map types to relational tables and to create satellite tables to represent many-to-many relationships between types.  This scheme would automatically partition Freebase in a reasonable way and provide an efficient representation where many obvious questions (e.g. “find female composers born in 1963 who are more than 65 inches tall”) can be answered with a minimum number of joins.


Large knowledge bases are becoming available that cover large areas of human concern, and we’re finding many applications for them.  It’s possible to handle databases such as Freebase and DBpedia on a single computer of moderate size today; however, generic databases and the hardware to store them on are going to grow beyond the ability of a single computer to process them.  Fact stores that (i) use efficient data structures, (ii) take advantage of parallelism, and (iii) can be tuned to the requirements of particular applications are going to be essential for further progress in the Semantic Web.


]]> 8
Using Linq To Tell if the Elements of an IEnumerable Are Distinct Fri, 13 Feb 2009 21:15:38 +0000 Paul Houle The Problem

I’ve got an IEnumerable<T> that contains a list of values, and I want to know if all of the values in it are distinct.  The function should be easy to use as a LINQ extension method and, for bonus points, simply expressed in LINQ itself.

One Solution

First,  define an extension method

public static class IEnumerableExtensions {
    public static bool AllDistinct<T>(this IEnumerable<T> input) {
        var count = input.Count();
        return count == input.Distinct().Count();
    }
}

When you want to test an IEnumerable<T>,  just write

var isAPotentialPrimaryKey = CandidateColumn.AllDistinct();


This solution is simple and probably scales as well as any solution in the worst case.  However, it enumerates the IEnumerable<T> twice and does a full scan even if non-distinct elements are discovered early in the enumeration.  I could certainly make an implementation that aborts early, using a Dictionary<T,bool> to store elements we’ve seen and a foreach loop, but I wonder if anyone out there can think of a better pure-Linq solution.
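For comparison, here is the early-abort approach sketched in Python; a C# version would have the same shape, with a HashSet<T> (or the Dictionary<T,bool> mentioned above) standing in for the set:

```python
def all_distinct(items):
    """Return False as soon as the first duplicate is seen."""
    seen = set()
    for x in items:
        if x in seen:
            return False    # abort early: no need to scan the rest
        seen.add(x)
    return True

# all_distinct([1, 2, 3])    -> True
# all_distinct([1, 2, 2, 3]) -> False, stopping at the second 2
```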

]]> 1
Subverting XAML: How To Inherit From Silverlight User Controls Tue, 10 Feb 2009 20:31:21 +0000 Paul Houle The Problem

Many Silverlighters use XAML to design the visual appearance of their applications.  A UserControl defined with XAML is a DependencyObject that has a complex lifecycle: there’s typically a .xaml file, a .xaml.cs file, and a .xaml.g.cs file.  The .xaml.g.cs file is generated by Visual Studio and ensures that objects defined in the XAML file correspond to fields in the object (so they are seen in IntelliSense and available to your C# code).  The XAML file is re-read at runtime, and drives a process that instantiates the actual objects defined in the XAML file — a program can compile just fine, but fail during initialization if the XAML file is invalid or if you break any of the assumptions of the system.

XAML is a pretty neat system because it’s not tied to WPF or WPF/E.  It can be used to initialize any kind of object:  for instance,  it can be used to design workflows in asynchronous server applications based on Windows Workflow Foundation.

One problem with XAML, however, is that you cannot write controls that inherit from a UserControl that is defined in XAML.  Visual Studio might compile the classes for you, but they will fail to initialize at runtime.  This is serious because it makes it impossible to create subclasses that let you make small changes to the appearance or behavior of a control.

Should We Just Give Up?

One approach is to give up on XAML.  Although Visual Studio encourages you to create UserControls with XAML,  there’s nothing to stop you from creating a new class file and writing something like

class MyUserControl : UserControl {
    public MyUserControl() {
        var innerPanel = new StackPanel();
        innerPanel.Children.Add(new MyFirstVisualElement());
        innerPanel.Children.Add(new MySecondVisualElement());
        Content = innerPanel;
    }
}
UserControls defined like this have no (fundamental) problems with inheritance,  since you’ve got complete control of the initialization process.  If it were up to me,  I’d write a lot of controls like this,  but I work on a team with people who design controls in XAML,  so I needed a better solution.

Or Should We Just Cheat?

If we still want to use XAML to define the appearance of a control,  we’ve got another option.  We can move the XAML into a control that is outside the inheritance hierarchy:  the XAML control is then contained inside another control which doesn’t have restrictions as to how it is used:

The XamlPanelInnards.xaml.cs code-behind is almost completely empty: it contains only the constructor created by Visual Studio’s XAML designer.  XamlPanel.cs contains a protected member called InnerControl which is of type XamlPanelInnards.  InnerControl is initialized in the constructor of XamlPanel, and is also assigned to the Content property of the XamlPanel, to make it visible.  Everything that you’d ordinarily put in the code-behind XamlPanelInnards.xaml.cs goes into XamlPanel.cs, which uses the InnerControl member to get access to the internal members defined by the XAML designer.

Step-by-Step Refactoring

Let’s imagine that we’ve got an existing UserControl implemented in XAML called the XamlPanel.  We’d like to subclass the XamlPanel so we can use it for multiple purposes.  We can do this by:

  1. Rename XamlPanel to OldXamlPanel; this renames both the *.xaml and *.xaml.cs files, so we can keep them around to look at
  2. Use Visual Studio to create a new “Silverlight User Control” called XamlPanelInnards.  This will have both a *.xaml and a *.xaml.cs file
  3. Copy the contents of OldXamlPanel.xaml to XamlPanelInnards.xaml.  Edit the x:Class attribute of the <UserControl> element to reflect the new class name, “XamlPanelInnards”
  4. Use Visual Studio to create a new class,  called XamlPanel.cs.  Do not create a XamlPanel.xaml.cs file!
  5. Copy the constructor,  methods,  fields and properties from the OldXamlPanel.xaml.cs file to the XamlPanel.cs file.
  6. Create a new private field in XamlPanel.cs like
    private XamlPanelInnards InnerControl;
  7. Now we modify the constructor,  so that it does something like this:
    public XamlPanel() {
       InnerControl = new XamlPanelInnards();
       Content = InnerControl;
       ... remainder of the constructor ...
  8. Most likely you’ll have compilation errors in the XamlPanel.cs file, because there are many references to public fields that now live inside the InnerControl.  You need to track these down and replace code that looks like
    TitleTextBlock.Text="Some Title";

    with

    InnerControl.TitleTextBlock.Text="Some Title";


  9. Any event handler attachments made in the XAML file will no longer work (they’ll trigger an error when you load the application).  You’ll need to convert
    <Button x:Name="PressMe" ... Click="PressMe_OnClick">

    in the XAML file to

    InnerControl.PressMe.Click += PressMe_OnClick;

    in the constructor of XamlPanel, and remove the Click attribute from the XAML.

  10. UI Elements that are defined in XAML are public, so there’s a good chance that other classes might expect UI Elements inside the XamlPanel to be accessible.  You’ve got some choices: (a) make the InnerControl public and point those references to the InnerControl (yuck!), (b) selectively add properties to XamlPanel to let outside classes access the elements that they need to access, or (c) rethink the encapsulation so that other classes don’t need direct access to the members of XamlPanelInnards.
  11. There are a few special methods defined on DependencyObject and FrameworkElement that you’ll need to pay attention to.  For instance, if your class uses FindName to look up elements dynamically, you need to replace
    FindName("SomeElement")

    with

    InnerControl.FindName("SomeElement")

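For option (b) in step 10, a delegating property is usually enough; in this sketch, TitleTextBlock is a hypothetical element defined in XamlPanelInnards.xaml:

```csharp
// A delegating property on XamlPanel exposes just one piece of the
// inner control, instead of making InnerControl itself public.
public string Title {
    get { return InnerControl.TitleTextBlock.Text; }
    set { InnerControl.TitleTextBlock.Text = value; }
}
```

Outside classes then write `myPanel.Title = "Some Title";` and never touch the innards directly.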
So far this is a refactoring operation:  we’re left with a program that does what it already did,  but is organized differently.  Where can we go from here?

Extending The XamlPanel

At this point, the XamlPanel is an ordinary UserControl class.  Its initialization logic is self-sufficient, so it can inherit (it doesn’t necessarily have to derive from UserControl) and be inherited from quite freely.

If we want to change the behavior of the XamlPanel, for instance, we could declare it abstract and leave certain methods (such as event handlers) abstract.  Alternatively, methods could be declared virtual.

A number of methods exist to customize the appearance of XamlPanel: since XamlPanel can see the objects inside XamlPanelInnards, it can change colors, image sources and text contents.  If you’re interested in adding additional graphical elements to the innards, the innards can contain an empty StackPanel; subclasses of XamlPanel can Add() something to that StackPanel in their constructors.
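Putting those two ideas together, a subclass might look like the following sketch; OnSubmit and ExtensionPanel are hypothetical names standing in for a virtual method declared on XamlPanel and an empty StackPanel defined in XamlPanelInnards.xaml:

```csharp
// A hypothetical subclass: it adds its own visual element to an
// extension point and overrides a virtual method of the base class.
public class SecondXamlPanel : XamlPanel {
    public SecondXamlPanel() {
        // ExtensionPanel is an empty StackPanel inside the innards.
        InnerControl.ExtensionPanel.Children.Add(
            new TextBlock { Text = "Extra content from the subclass" });
    }

    // OnSubmit is assumed to be declared virtual (or abstract) on XamlPanel.
    protected override void OnSubmit() {
        // subclass-specific behavior goes here
    }
}
```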

Note that you can still include a child XamlPanel inside a control defined in XAML by writing something like

<OURNAMESPACE:SecondXamlPanel x:Name="MyInstance" />

You’re free to make XamlPanel and its children configurable via the DependencyObject mechanisms.  The one thing that you lose is public access to the graphical elements inside XamlPanelInnards: many developers would think that this increase in encapsulation is a good thing, but it may involve a change in the way you do things.
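Making XamlPanel configurable from XAML with the DependencyObject mechanism might look like this sketch; Title is a hypothetical property name:

```csharp
public class XamlPanel : UserControl {
    // A hypothetical dependency property, settable from XAML as Title="...".
    public static readonly DependencyProperty TitleProperty =
        DependencyProperty.Register(
            "Title", typeof(string), typeof(XamlPanel),
            new PropertyMetadata(""));

    public string Title {
        get { return (string)GetValue(TitleProperty); }
        set { SetValue(TitleProperty, value); }
    }
}
```

A consumer could then write `<OURNAMESPACE:XamlPanel Title="Hello" />` in XAML, and the property system handles the rest.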

Related articles

The community has pointed me to a few other articles about XAML, inheritance, and configuring custom XAML controls.

In a lavishly illustrated blog entry,  Amyo Kabir explains how to make a XAML-defined control inherit from a user-defined base class.  It’s the converse of what I’m doing in this article,  but it’s also a powerful technique:  I’m using it right now to implement a number of Steps in a Wizard.

Note that Silverlight has mechanisms for creating Custom Controls: these are controls that use the DependencyObject mechanism to be configurable via the XAML files that include them.  If you’re interested in customizing individual controls rather than creating subclasses, this is an option worth exploring.


XAML is a great system for configuring complex objects,  but you can’t inherit from a Silverlight class defined in XAML.  By using a containment relation instead of an inheritance relation,  we can push the XAML-configured class outside of our inheritance hierarchy,  allowing the container class to participate as we desire.  This way we can have both visual UI generation and the software engineering benefits of inheritance.
