Open source, open conflict?

I am currently messing around in the pits of .NET e-commerce. I thought it would be the last place I’d find open-source-inspired disharmony. But no, even here it is to be found. 😉

OK, a bit of background.

NOP Commerce is an e-commerce platform based on Microsoft’s open source ASP.NET platform. The project has been around for five or six years and gets very good reviews too. Last year SmartStore.NET forked NOP. Nothing wrong with that in itself: NOP is GPL’ed. That would be fine except for a clause in NOP’s licence which states that you must keep a link to the project website in your site footer unless you pay a small $60 waiver fee.

The problem, and the tension, comes from SmartStore.NET having removed the link requirement from their fork.

Whatever the legalities involved, and I am not legally qualified to comment either way, the SmartStore.NET fork doesn’t feel right. The NOP guys have put a ton of work into the project and they deserve better.

The sad thing is that there is a lot in SmartStore.NET to like. Wouldn’t a better option have been to merge the changes into NOP Commerce so that everybody wins?

Update: if you are after a .NET based e-commerce system then Virto Commerce is worth a look. It looks to be maturing quickly.

Top 5 Open Source Event Correlation Tools

Networks create lots of events. Sometimes thousands per minute.

Events can be SNMP traps generated by a server rebooting, syslog messages, Microsoft Windows event log entries and so on.

How do you know which events are important? Which ones are telling you something you need to act upon?

That is where event correlation tools come in handy. You feed all of the events into the tool, as well as a description of the structure of your systems, and its job is to flag up the important ones.

  1. Simple Event Correlator (SEC) – SEC is a lightweight, platform-independent event correlation tool written in Perl. The project was registered with Sourceforge on 14th Dec 2001.
  2. RiverMuse – correlates events, alerts and alarms from multiple sources into a single pane of glass. Open core with a closed enterprise product cousin.
  3. Drools – a suite of tools written in Java, including Drools Guvnor (a business rules manager), Drools Expert (a rule engine), jBPM 5 (process/workflow), Drools Fusion (event processing/temporal reasoning) and OptaPlanner (automated planning).
  4. OpenNMS – whilst not a dedicated event correlation tool, OpenNMS does contain an event correlation engine based upon the Drools engine mentioned above.
  5. Esper (and Nesper) – Esper is a Java-based component for complex event processing (Nesper is the .NET version of Esper).

If you want a survey of event correlation techniques and tools, you could do a lot worse than read Andreas Müller’s master’s thesis titled Event Correlation Engine. It is a few years old, but is still pretty current.
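
The threshold-style rules these tools implement can be sketched in a few lines. Below is a minimal, illustrative Python version of something like SEC’s SingleWithThreshold rule type; the class name, event format and API are my own invention for illustration, not any tool’s actual interface.

```python
import re
from collections import deque

class ThresholdRule:
    """Flag when `count` events matching `pattern` arrive within `window` seconds.

    A toy sketch in the spirit of SEC's SingleWithThreshold rule type.
    """

    def __init__(self, pattern, count, window):
        self.pattern = re.compile(pattern)
        self.count = count
        self.window = window
        self.times = deque()

    def feed(self, timestamp, message):
        """Return True if this event pushes the rule over its threshold."""
        if not self.pattern.search(message):
            return False
        self.times.append(timestamp)
        # Drop matches that have fallen out of the correlation window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.count

rule = ThresholdRule(r"login failed", count=3, window=60)
events = [(0, "login failed for bob"),
          (10, "disk almost full"),
          (20, "login failed for bob"),
          (30, "login failed for bob")]
alerts = [t for t, msg in events if rule.feed(t, msg)]
print(alerts)  # the third matching failure, at t=30, trips the rule
```

The “disk almost full” event is simply ignored: the rule only flags up the repeated login failures, which is exactly the important-versus-noise filtering described above.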

Top 5 Open Source NetFlow Analyzers

NetFlow is a standard from Cisco for transferring network analysis data across a network. The last thing you want to do with your routers and switches is give them the burden of analyzing network traffic, so Cisco came up with NetFlow so that you can offload the analysis to less CPU-bound devices.

  1. NTop – a traffic analyser that runs on most UNIX variants and on Microsoft Windows. In addition, NTop includes Cisco NetFlow and sFlow support. For an introduction, please see this introduction to NTop video.
  2. Flow-tools – a library and a collection of programs used to collect, send, process, and generate reports from NetFlow data.
  3. FlowScan – FlowScan processes IP flows recorded in cflowd-format raw flow files and reports on what it finds. JKFlow is an XML-configurable FlowScan Perl module for reading and analyzing your NetFlow data.
  4. EHNT – or Extreme Happy NetFlow Tool, turns streams of Netflow data into something useful and human-readable.
  5. BPFT – The BPFT daemon builds on top of libpcap and uses the BPF (Berkeley Packet Filter) mechanism for capturing IP traffic.
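
To give a flavour of what these analyzers do with flow data, here is a minimal Python sketch of a top-talkers report. The flow tuples are simplified stand-ins of my own devising; real NetFlow records carry many more fields (ports, protocol, packet counts, timestamps).

```python
from collections import Counter

# Each simplified flow record: (source IP, destination IP, bytes transferred).
flows = [
    ("10.0.0.5", "192.168.1.9", 12_000),
    ("10.0.0.7", "192.168.1.9", 3_500),
    ("10.0.0.5", "192.168.1.20", 48_000),
    ("10.0.0.9", "192.168.1.9", 1_200),
]

def top_talkers(flows, n=2):
    """Sum bytes per source IP and return the n heaviest senders."""
    totals = Counter()
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    return totals.most_common(n)

print(top_talkers(flows))  # [('10.0.0.5', 60000), ('10.0.0.7', 3500)]
```

Aggregations like this, per source, per destination, per port, are the bread and butter of every tool in the list above.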

For an exhaustive list of open source and commercial NetFlow analyzers, you could do a lot worse than the FloMA: Pointers and Software collection.

Update July 2013: Ray Van Dolson has a link to NFSEN in the comments; you will also need NFDUMP.

Sometimes the open core functionality ceiling gets lower

First of all, a little background will make this post more understandable to non-IT folks.

A bit of background

Zenoss is a network management software vendor with an open source core product, called Zenoss Core, and a closed source product called Zenoss Enterprise.

Zenoss is written in the Python programming language and uses the Zope web application framework.

Relstorage is a Zope add-on for saving data to a relational database. Relstorage allows Zenoss to use a relational database as its storage backend, with all of the scaling-out benefits that entails.

A relational database does not need to run on the same server as Zope; you can run it on a completely different server. In fact, you can run the relational database on a cluster of machines, giving substantial scalability benefits. With Zope’s native database format, running the storage on a different machine isn’t possible.
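
For the curious, a Relstorage backend is wired into Zope’s configuration along these lines. This is a sketch from memory: the hostname and credentials are placeholders, and the exact directives vary between Relstorage versions, so check the Relstorage documentation for your release.

```
%import relstorage
<zodb_db main>
    mount-point /
    <relstorage>
        <mysql>
            host db.example.com
            db zodb
            user zenoss
            passwd secret
        </mysql>
    </relstorage>
</zodb_db>
```

The point to notice is that the `<mysql>` section can name any reachable database server, which is precisely what makes the scaling-out described above possible.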

Zope’s native database format limits how scalable Zope can be, which in turn limits how scalable Zenoss can be.

A bit more background

Way back in 2010, I and others suggested that an open core strategy would lead to some difficult decisions about which features go into the open source product and which into the closed enterprise product.

I suggested that a feature ceiling could be reached in the open source product and offered some modest proof that it existed.

The ceiling fell in… a bit

Phew. Now that the introductions are over, what has all of the above got to do with open core software and a lowering functionality ceiling?

The following is part of a conversation on the dev IRC channel run by Zenoss.


“… A decision was made some time ago to move from standard zopedb to relstorage to improve performance.  Recently, a decision was made to remove the relstorage code from Zenoss Core.”

Bill Karpovitch, co-founder and CEO of Zenoss Inc.:

“a mistake we made in the process was that product management had it slated for an enterprise feature but we began development in the core trunk.  for better or worse, our decision was to pull it back. we are continuing to look this as part of the plan forward.”

“all good points.  to be clear, the Core/Enterprise feature decisions are challenging and involve tough trade offs.”

As Bill Karpovitch says above, the relstorage feature wasn’t intended to go into Core, it was supposed to be an Enterprise only feature.

Unfortunately, it snuck into Core accidentally and was then removed. This upset the Zenoss community quite a lot, as they believed it would effectively create a fork between the Core and Enterprise products. A fork would make creating add-ons more difficult because Core and Enterprise would potentially need to be tested separately. Supporting both would also be harder, with the core of the Enterprise product differing from Zenoss Core.

The relstorage feature was released in Zenoss Enterprise version 4.1.


BTW this post is in no way an attack on Zenoss, or their very fine business. Kudos to them for having this kind of discussion in public. The above is just a concrete example of the friction that is inevitable when you have an open source product and a closed source product.

The friction isn’t confined to open core companies either, closed source companies have exactly the same friction when they have tiered products too.

There is a happy ending, relstorage was eventually added to Zenoss Core version 4.2.

Network management’s “new wave” six years on

How time flies.

It has been six years since I wrote about Network management’s “new wave” and thought it would be interesting to go back and see what has happened. We are now at the outer envelope of the VC funding cycle so things should be sorting themselves out one way or another.

The “new wave” comprised Hyperic, Zenoss and Groundwork Open Source: VC-funded, open source network management companies.

Open source wasn’t new to the network management scene in 2007, there had been well known projects, like Nagios, MRTG and OpenNMS, around for a number of years prior to that.

What was different was combining open source licensing with big wads of venture capital. A total of $79.2M has been invested into the “new wave” over the last 9 years.

What has been the effect on network management software of the combination of open source licensing and oodles of venture capital?

Current state of play

My first impression is that not much has changed. Let’s dig a little deeper and see.

Hyperic was founded in 2004 and purchased by SpringSource in 2009 after having raised a total of $9.9M in two rounds of fund raising.

Zenoss was founded in 2005 and has raised a total of $40.8M in three rounds over the last eight years, the most recent being in September 2012 when Zenoss raised a further $25M.

Groundwork Open Source was founded in 2004 and has raised a total of $28.5M in four rounds, the most recent being in October 2009 when Groundwork raised $5M.

Are they still open?

Looks like Groundwork isn’t that open any more. The Groundwork Monitor Core product is restricted to 50 devices, and the licence doesn’t look at all open.

The open source moniker has gone too. It is hard to tell that, say, Zenoss is actually open source by looking at its home page. If you were an alien just off the mother ship with only the home page to go by, you wouldn’t know that the core product is open source.

Effects upon closed source competitors

I suggested that the “new wave” would have the effect of opening up the “big 4”. I can see no evidence of this at all. I also thought that the “big 4” would be good candidates to buy the “new wave” and that hasn’t happened either.

Effects upon consumers

The one big winner has been users. Open source network management software ten years ago could be hard work with no proper packaging and woeful documentation. Now, there are some really nice options that are much easier to work with. There are also large communities as well to offer support and guidance where necessary.


I find it hard to believe that too many people would consider the “new wave” experiment to be a major success story. I’m not saying it has failed, but venture capitalists invest money to win big, and the investment hasn’t won big. It probably didn’t help that the financial meltdown happened.

There are a number of winners, not least among the many users who have high quality software to use at minimal cost.

I doubt that venture capitalists will be rushing to find their cheque books to fund another round of open source network management companies.

Had the same wave of money been invested in closed source companies doing the same thing I’d bet that they would have been more successful in strictly money terms at least. If a user jumps on board your ecosystem for the sole reason they can get your core offering for free, is that user going to be worth the same in the long term as a customer who literally bought into the ecosystem? My expectation is that they are not.

I am not saying that open source businesses aren’t perfectly viable businesses. It just means they may not be as profitable as an equivalent closed source business. And money, at the end of the day, is what venture capitalists are interested in.

Automated install comes to open source .NET projects

One of the nice things about Linux is the ability to install apps (and dependencies) very easily using apt-get or similar. Windows users have been missing a similar tool for a long time. Never fear: the Scottish Alt.Net group has written Hornget, a tool for installing open source .NET projects.

Quite a few projects are supported, though most are of interest only to programmers. It would be nice to see a lot more user oriented tools like games and the like.

Update June 2011: There is in fact now a far better tool called Nuget; the project gallery is here.

Update January 2013: Puppet has been ported over to Microsoft Windows. Great for installing dependencies in a virtual setup.

Update June 2013: Chocolatey brings apt-get type installer to Microsoft Windows. Seems to be gaining some traction too.

Musings upon the open core functionality ceiling

One of the things you’d expect from an active open source project is that the code base is likely to grow as more and more features are added.

In An exploration of open core licensing in network management I mentioned that one possible side effect of open core software is the creation of a functionality ceiling.

A functionality ceiling is a level of functionality beyond which the community edition’s product manager is unwilling to go, for fear that the enterprise product will become less attractive to potential customers.

That got me thinking, if a functionality ceiling does exist, how can I demonstrate it?

The graphs below are taken from the Ohloh open source project directory. The rather useful thing about Ohloh is that, in addition to cataloguing open source projects, it also performs extensive code analysis.

The two graphs below are taken from the Hyperic code analysis and the Zenoss code analysis pages on Ohloh.

Hyperic Code Analysis Graph
Zenoss Code Analysis Graph

Both of the graphs clearly show a plateau in the quantity of code committed to the respective community edition code repositories. There may be a number of explanations for the plateau, perhaps heavy re-factoring work clears the space required by new features. Though, realistically I doubt that re-factoring would be capable of continually reducing the size of the code base in order to make way for new code.

The plateaus look suspiciously like evidence that open core software, at least in the network management world, tends towards a functionality ceiling.

An exploration of open core licensing in network management

Open core refers to a business strategy employed by some commercial open source companies. The open core strategy is popular amongst companies within network management.

The open core strategy is largely defined by creating an open source community product that is freely given away, and another product, the enterprise edition, that is sold as a regular commercial software product.

The open core business model is useful to software vendors because it permits them to build a community surrounding the open product who will form the nucleus of the people who upgrade to the enterprise product.

The enterprise product is useful because it is packaged and sold in the same way as proprietary software. One of the major pluses of the open core strategy is that having a paid-for product, with all of the sales infrastructure that implies, fits in with many companies’ purchasing processes. Tarus Balog, project lead of OpenNMS, posted about how his pure play open source business sometimes struggles with companies who expect to purchase software, rather than deploy the software for free and pay for training and implementation services.

Open core as the new shareware

The open core strategy has been likened to shareware, a software business model pioneered by Andrew Fluegelman, Jim “Button” Knopf, Bob Wallace et al in the late 1970s and long favoured by small Independent Software Vendors. Under the shareware model, the publisher distributes a limited version of the software that is either time limited or with key features disabled, in the hope that users will find the product useful enough to upgrade to the full version.

The shareware product is usually upgradeable to the full version by entering a product key supplied when the full version is purchased.

Whilst there is at least a grain of truth in the comparison, there are some key differences between shareware and open core:

  • Key features are missing – open core software is useful in and of itself. An open core product that isn’t functional will not gain traction with a user community;
  • No community contributions – open core companies are keen to develop a community around their open core offering and hope/expect the community to contribute to the software eco-system surrounding the open core project;
  • Time limited – open core software is not time limited, you can use it for as long as you want.

The main similarity between the open core and shareware business models is the desired end result on the part of the publisher: both hope to up-sell users to the full version of the product. The method is also very similar: both withhold valuable functionality until the user upgrades.

Open core in network management

There has been quite a large influx of commercial open source companies into network management in the last few years, many with the largess of venture capital behind them. The most recent, RiverMuse, released the community edition of their event and fault management offering during 2009 and is following an open core strategy with the imminent release of their enterprise edition during early 2010.

In many ways network management is a perfect environment in which to exploit an open core strategy. Network management products are commonly structured around a central engine with add-ons integrating with third party networking hardware and servers.

The enterprise product is built around the core engine with a number of add-ons not provided in the community edition. The dual product strategy is most clearly taken by Zenoss who provide an open core engine but withhold many useful add-ons for important enterprise services like Microsoft Exchange and Active Directory. Whilst anybody could use the core engine to write their own add-on to provide the same functionality, many organisations find it more efficient to pay for a ready made and tested solution.
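
The engine-plus-add-ons structure described above can be sketched as a simple plugin registry. Everything below is illustrative and of my own invention, not Zenoss’s actual API; the point is that an enterprise edition can, in effect, just ship extra collectors for the same registry.

```python
class MonitoringEngine:
    """Core engine: holds a registry of collector add-ons and dispatches polling.

    A toy sketch of the engine-plus-add-ons structure common to open core
    network management products.
    """

    def __init__(self):
        self.collectors = {}

    def register(self, name, collector):
        """Add-ons plug in here; the engine itself never changes."""
        self.collectors[name] = collector

    def poll(self, device):
        """Ask every registered add-on for this device's metrics."""
        return {name: collector(device)
                for name, collector in self.collectors.items()}

engine = MonitoringEngine()
# The community edition ships a basic reachability collector...
engine.register("ping", lambda device: {"reachable": True})
# ...while an enterprise add-on might contribute, say, an Exchange check.
engine.register("exchange", lambda device: {"queue_length": 0})

print(engine.poll("mail01"))
# {'ping': {'reachable': True}, 'exchange': {'queue_length': 0}}
```

Because the add-ons sit behind a stable registration point, anybody could write their own Exchange collector against the open engine, which is exactly why many organisations instead find it more efficient to pay for the ready-made one.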

Vendor perspective

The pros and cons of the open core business model from the vendor’s perspective.

Open core licensing

The central part of an open core strategy is the dual licence. The community edition product is licensed under an open source licence, the enterprise product is usually licensed under a proprietary licence. Sometimes, when copyright or licensing issues intrude, the enterprise product also has an open source licence. Groundwork Monitor Enterprise Edition is a good example of an enterprise product having an open source licence. Dual licensing is only possible if you hold the copyright to all of the code, or have the agreement of the third party copyright holders to distribute under a restrictive licence. The same applies to any libraries distributed with the enterprise product.

If the enterprise product is licensed under an open source licence then there is always the danger that a customer may release the product in public, including the source code, meaning that potential customers no longer need to purchase the enterprise product in order to get hold of the value added features.

A fork in the road

A rival copy of an open source project based upon the same source code is called a fork.

As the core community product is freely available to anybody, there is a danger that a third party could create add-ons to the community edition and sell them in direct competition to the open core company. Whilst there is a danger of a competitor emerging to utilise the community product, there are some very good reasons why it won’t happen.

The competitor would be barred from using trademarks from the community edition in their product name or website. Consequently, it would be very difficult to promote the add-on to the desired audience. Trademark issues were one of the causes of the Icinga Nagios fork for instance.

In order to get around the trademark issue, the competitor would be forced to fork the community edition and release it under a new name. They could then sell an add-on product. Plainly the original community wouldn’t know anything about the fork and it would take a lot of marketing effort, in an already competitive market, for anybody to notice.

With the original community largely closed off, the competitor would have to start afresh and build a new community from scratch. Building a community takes time and money, external investment would be a very useful way to kick start the process. The competitor would not make a particularly attractive target for investment given that it doesn’t own any of the intellectual property of the fork.

In addition, the competitor would need to be absolutely certain that there are no source code or other artefacts which are being distributed as an exception to the community edition licence. There may also be clauses in the licence that have been inserted to guard against forking. The Zenoss licence contains just such a poisoning clause for instance.

Whilst forking is a danger to any commercial open core company, it does not appear to be a very pressing danger in practice.

Open core strategy

The open core strategies employed by network management companies vary quite widely. There are companies like Hyperic who have pursued a pure open core strategy very successfully, controlling all of the software in both the core and enterprise products.

At the other end of the spectrum, Groundwork Open Source have executed more of an aggregation strategy, bundling well known open source projects together and making them into an enterprise network management platform with their own glue software.

The aggregation strategy could be considered more in keeping with the open source philosophy of software reuse. It has a number of advantages and disadvantages. The main advantage is that by reusing best-of-breed components you get to market much faster than by starting from scratch. On the other hand, the vast array of licences used by the various open source projects is likely to keep a good number of lawyers busy sorting out all of the requirements, and some open source licences may well preclude use of the software in a commercial setting. Porting the software to new platforms is also likely to be difficult: you can only support the intersection of the platforms supported by the constituent projects. And without the agreement of the project leads, you may have problems with trademark use, especially if you wish to market your software as being powered by the project in question. Many open source projects, Nagios being a very good example, protect their names quite vigorously.

On the positive side, if you can leverage existing projects, you will have a number of communities ready and waiting to be up-sold to your enterprise product.

Community perspective

An exploration of open core from the community perspective.

The Open Core Functionality Ceiling

One of the most difficult balancing acts for product managers of open core products is knowing which features should go into the community product and which should go into the enterprise product.

Does having an open core strategy mean that there are features that will never appear in the community product? Does the requirement to provide sufficient leverage to the sales VP provide an artificial ceiling for the functionality of the community product?

In a fully functioning open source ecosystem, the community would tend to close the gap between the community product and the enterprise product. Plainly, an open core company is not going to be very comfortable with the community undermining the value proposition of the enterprise product.

Community contributions in an open core world

One of the problems with the open core strategy from the vendor perspective is that you need to be careful with how you handle community contributions. In the case where the company takes no third party contributions this isn’t going to be a problem, all of the engineers are on the company payroll.

If the company accepts third party contributions things become quite complex. In order to create a proprietary version of the software you either need to own the copyright to all of the software or have some kind of agreement to use the software in that way. A good example of such a third party contribution agreement is the Rivermuse contribution agreement.

The Rivermuse agreement must be signed each time a contribution is made. Whilst, from Rivermuse’s perspective, the agreement is absolutely necessary, I would think that the terms might make third party contributors think twice before agreeing to it.

Not only do Rivermuse have the right to sell your software without compensation, you have to assign the copyright of your work to Rivermuse. They also have the right to apply for patents based upon your work. If you submit your code, you could find yourself being sued for patent infringement by Rivermuse for discoveries that you made in the first place.

The OpenNMS Project Contributor Agreement, like the Rivermuse contribution agreement, also mandates that contributors assign copyright to the OpenNMS Group. The major difference is that the contributions are effectively owned by two parties, the contributor themselves and the OpenNMS Group, an arrangement known as dual ownership. The contributor also grants the OpenNMS Group a licence to use any patents contained within the contribution.

Open source etiquette

Many open source projects are written by people who gain no financial benefit from doing so. Open source software has been around for long enough that certain modes of behaviour have become the norm. One of the norms is the expectation that anybody wishing to incorporate an open source project into their own offering will ask the lead of that project for permission.

One of the dangers that commercial exploitation brings to the open source community is that the norms may be trampled upon. Is a company backed by outside investors likely to take a project owner’s views into account when it has its own shareholders to concern itself with? I’d like to be a fly on the wall at the board meeting where the VP of engineering explains that a certain path cannot be followed because an open source project owner hasn’t agreed to their work being used, especially when the company would be perfectly within its legal rights to use it.

If a project lead sees their project being exploited by open source companies will it become a motivator to improve the software or will it become a disincentive?


Whilst there are many issues surrounding open core as a business strategy, it cannot be denied that an awful lot of high quality open source software has been written in pursuit of it.

When one surveys the open source network management landscape from before the open core invasion, it is hard to see how the user community has lost out.

Lessons learnt from the failure of TimeTag

I have a confession to make: I’ve developed a failed open source project! There, I’ve said it. It’s now public knowledge and I can hang my head in shame… lead me to the village stocks so you can all throw rotting vegetables at me.

Happily, I don’t feel like that. Failure is, well, no big deal. Of course it does sting a little bit that I wasted an awful lot of time developing the software. What could I have done with the time had I not written the 11,184 lines of code Ohloh says I wrote? Well, I’ll never know, but…

After having a failure, any failure, it is quite healthy to take a look at it and try to figure out what mistakes were made and see if there are any lessons to be learnt.

The main mistake was to write TimeTag at all. Perhaps it would help to explain why I wrote TimeTag in the first place.

TimeTag was intended to kick start an open source network management / systems administration software ecosystem based around the PowerShell environment. (If you don’t know what PowerShell is, there is a good explanation here.)

If you want to build an ecosystem like the Linux network management / sys admin toolset, you are going to need the basic tools available to build upon. The Linux ecosystem has a few hub projects upon which most of the rest of the ecosystem builds. It stands to reason that if you don’t have the hub tools, the ecosystem won’t take root.

So that’s where TimeTag came into the picture, it was my feeble attempt to build one of the hub tools for the PowerShell environment. The problem is that the Linux ecosystem didn’t develop in the way I envisaged the PowerShell environment developing.

RRDTool is the considerably more successful cousin of TimeTag, but RRDTool was not written before the tools that depend upon it. Tobi Oetiker, the original author of RRDTool, also created the MRTG project. MRTG is, if not the first, then pretty close to the first, open source network monitoring application. The MRTG project originally had a very simple time series database (a mechanism for storing readings). As time went by, and MRTG was used on ever larger networks, the simple time series database didn’t scale well. RRDTool was written to provide a scalable time series database to cope with the ever-increasing demands placed upon MRTG.
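
The “round robin” idea at the heart of RRDTool, a time series store that never grows past a fixed size, can be sketched in a few lines of Python. This is a toy illustration of the concept only, not RRDTool’s actual behaviour: real RRDs also consolidate ageing samples into coarser averages rather than simply discarding them.

```python
from collections import deque

class RoundRobinSeries:
    """Fixed-size time series store, in the spirit of RRDTool.

    Once `size` samples are held, each new sample evicts the oldest,
    so storage stays constant no matter how long you keep polling.
    """

    def __init__(self, size):
        self.samples = deque(maxlen=size)

    def update(self, timestamp, value):
        self.samples.append((timestamp, value))

    def fetch(self):
        return list(self.samples)

series = RoundRobinSeries(size=3)
for t, v in [(0, 1.0), (60, 2.0), (120, 4.0), (180, 8.0)]:
    series.update(t, v)

print(series.fetch())  # oldest sample (0, 1.0) has been evicted
# [(60, 2.0), (120, 4.0), (180, 8.0)]
```

The bounded storage is exactly what let MRTG-style polling run for years without its database growing without limit.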

So, rather than building one of the hub projects (a RRDTool equivalent) I should have started by building a MRTG equivalent for PowerShell instead. Then, if that had been successful, I should have written TimeTag. Instead of founding a project it would probably have been better to join the PolyMon project as a developer and then extracted the time series database into PowerShell.

Ain’t hindsight a wonderful thing?!

Update: the TimeTag repo is now hosted on Github. Feel free to fork.

Open source network management buzz comparison 2009

I did a comparison of the buzz for the leading open source network management tools in 2008 so I thought it would be interesting to do the same comparison for 2009 and see what’s changed.

As I did last year, I’ve compared the number of searches for the project name using Google Trends. As always, this post is not intended to be indicative of the usefulness of a particular tool to your requirements.

Open Source Network Management System Trends

Firstly, a comparison of the major players in open source network management: Zenoss, Hyperic, Nagios, MRTG and OpenNMS. The most striking thing about the graph to me is the decline in searches for Nagios; from the middle of 2009 searches declined quite steeply. MRTG has also been declining, though that just looks like a continuation of the decline evident over the last few years.

Open Source Network Management System Trend 2009

A Comparison of the Nagios Ecosystem

Whilst the above graph showed a reduction in the relative number of searches for Nagios, perhaps the Nagios ecosystem graph can explain it. Icinga, a Nagios fork, was created during 2009 and may be responsible for at least some of the decline. Icinga appears on the graph during late April and has a steady presence throughout the rest of 2009 save for a small period during the Christmas break.

A Comparison of the Nagios Ecosystem 2009

Open vs Closed Network Management Systems

Given that 2009 was a year of recession in many countries, perhaps it won’t surprise too many to see both the commercial open source and the proprietary tools trending downwards. I suspect that 2009 was a tough year for winkling money out of IT budgets.

Open vs Closed Network Management Systems 2009


All in all an interesting year. Apart from the Icinga/Nagios episode, it seems odd that none of the tools made significant progress during 2009. If open source tools were ever to make a move against their proprietary cousins, you would have expected it to be in 2009, given the economic background. Budgets have been tight, so why haven’t open source tools made progress in these recessionary times?