Trademarks and open source software

Open source is an umbrella term for software licenses that grant broad rights to users. Generally speaking, if software is covered by an open source license, you have a right to the source code for that software, as well as the right to modify it and distribute your changes to others.

What are Trademarks?

“A trademark, trade mark, or trade-mark is a recognizable sign, design or expression which identifies products or services of a particular source from those of others.”

Definition of a Trademark

A trademark gives the trademark holder the right to control who may use the mark and in what circumstances.

Trademarks and Open Source

Open source projects give away an awful lot. They have very little control over the source code, for instance. Unlike a commercial product, there is no secret sauce stopping others from distributing the project as their own.

Projects only have one tool in their armoury: one or more trademarks. Whilst anybody can take a project’s source code and distribute it under any name they care to use, they may not distribute their version of the project using the trademarked name.

Problems with Trademarks and Open Source

A trademark introduces a legal duty to defend the mark against people or organisations using the mark without permission. When lawyers become involved in open source software, it may not sit too comfortably with the community.

Keir Thomas wrote the book Ubuntu Pocket Guide and Reference. Canonical, the company that maintains the Ubuntu Linux distribution, had a few issues with Keir’s use of the Ubuntu trademark on the site where he markets his book. That does sound a bit heavy-handed on Canonical’s side, but just because one party has been heavy-handed doesn’t automatically mean that trademarks are incompatible with open source.

The driving force behind open source is to ensure that users can customise their software and distribute their changes to whomever they like. That does not mean they should be able to distribute their changes under the umbrella of the original project. A free-for-all where anybody could distribute any project under whatever name they liked would seriously compromise open source quality and would make open source acceptance in business much harder.

If you want a copy of Ubuntu to install on your server, you need to know that you are installing a pukka copy of Ubuntu.

The biggest friction point for an open source project comes when the project becomes successful enough to sustain a large community. The project founders will likely wish to create a legal entity in which to park the project’s assets. This professionalisation of the project may disturb people who have been using the project name freely up until that point.

If you wrote a book about an open source project, you may have freely used the project name in your book. Before the project name was registered as a trademark, this would not have been a problem for either the author or the project admins. The project admins would probably be grateful for the attention and documentation. After the trademark is registered, the author may not realise that they then need permission to use the project name in subsequent editions.

People often get upset when a project name that was previously as open as the project source code becomes proprietary.

Update 1: Ethan Galstad on why Nagios requires a trademark.

Update 2: Fleshed matters out quite a lot. All a bit vague before.

Nagios responds to the ICINGA fork

Matt Asay over at The Open Road commented recently that forks are a sign of strength in open source. I’m sure he’s right, but they are not necessarily a sign of strength for the project being forked. The one positive thing is that a fork makes the community sit up and review its root cause.

As Andreas Ericsson says in his post The future of Nagios, recent events have demonstrated weaknesses in the structure of the Nagios project, specifically that Ethan Galstad is the only committer of fixes and enhancements to Nagios. A single committer is fine until that committer no longer has sufficient time to keep up with community-submitted fixes and enhancements. Understandably, individual contributors are going to get frustrated when their patches and enhancements are not incorporated into the project.

If nothing more comes of the ICINGA fork than a review of the Nagios structure, then the fork will have been worthwhile.

A perspective on open source network monitoring tools…

…by Grig Gheorghiu over on the Agile Testing blog: The sad state of open source monitoring tools.

“I wish there was a standard nomenclature for this stuff, as well as a standard way for these tools to inter-operate. As it is, you have to learn each tool and train your brain to ignore all the weirdness that it encounters.”

One of the problems with I.T. is the absence of a standard terminology; things would be a lot easier if everybody used the same terms. Kinda hard to see how this could be imposed, though. I guess a standard terminology will just evolve once the industry has matured a little more.

Open source network management in Google 2001 vs Google 2008

Google have released a fully searchable version of their first available index from 2001 to celebrate their 10th birthday. I thought it would be interesting to compare and contrast a search for “open source network management” using the 2001 index and the current index.

The first thing that springs out is all of the adverts in the 2008 version. My guesstimate is that you’re going to be bidding well north of $5 per click for the top spot there.

The second thing that pops out is the number of results: 1,330,000 versus 11,900,000. That’s a heck of a lot of growth: getting on for ten times more pages matching the search between 2001 and 2008.

The search results themselves seem better in 2008 than way back in 2001, in the sense that the results actually point to open source network management tools, with the inevitable Wikipedia article thrown in for good measure.

Things sure are more competitive now. 😉

In depth open source network management comparison

Jane Curry of Skills 1st, a UK-based network management training and consultancy company, has written a rather good open source network management tool comparison. It is a large PDF file of around 150 pages, so you have been warned!

Kudos to Jane for doing the comparison; it must have been a whole heap of work. Enjoy!


Distributed network monitoring introduction

A number of mid-level network monitoring products, such as What’s Up Gold and Intellipool, have recently implemented distributed monitoring features, so distributed monitoring is now affordable for a lot more companies.

Single Poller Monitoring

With regular network monitoring you have a single poller measuring network and server performance from a single location on your network.

Architecture of a central poller in a distributed network

Single poller monitoring works well when the network is small or only has a single site. Every request is made from a single location to each of the resources being measured.

Whilst single poller network monitoring is well suited to single site performance monitoring, it does not scale well on larger, multi-campus networks.
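
To make the single poller idea concrete, here is a minimal sketch in Python of a central poller that TCP-checks a handful of hosts from one location. The host names, ports and polling interval are assumptions for illustration only, not taken from any particular product.

    import socket
    import time

    # Hypothetical targets and polling interval - substitute your own.
    HOSTS = [("web01.example.com", 80),
             ("mail01.example.com", 25),
             ("db01.example.com", 5432)]
    INTERVAL = 60  # seconds between polling rounds

    def tcp_check(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        # Every check originates from this one machine, wherever it sits on the network.
        for host, port in HOSTS:
            status = "UP" if tcp_check(host, port) else "DOWN"
            print(time.strftime("%H:%M:%S"), host, port, status)
        time.sleep(INTERVAL)

Because every probe originates from the same machine, each resource is tested across whatever links happen to sit between that machine and the resource.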

What is Distributed Network Monitoring?

Distributed network monitoring involves multiple pollers distributed around your network, measuring performance from multiple locations.

Architecture of distributed polling in a distributed network

Multi-campus networks typically have WAN links interlinking the various sites. WAN links are usually much slower and more expensive than LAN links. By placing your network monitoring probe in a single central location you are inevitably going to send more traffic over your WAN links.

Distributed network monitors let you locate your probes close to the resources being monitored, with only the statistics being synchronised en masse back to a central Network Operations Centre (NOC).
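
As a sketch of how a distributed setup differs, the remote poller below runs the same sort of checks against LAN-local resources and then pushes one compact batch of results back to a central NOC collector. The NOC URL, site name and targets are hypothetical; a real product would also handle authentication, retries and buffering during WAN outages.

    import json
    import socket
    import time
    import urllib.request

    # Hypothetical values - a real deployment would supply its own.
    NOC_URL = "http://noc.example.com/api/results"   # assumed central collector endpoint
    SITE = "branch-office-1"
    LOCAL_TARGETS = [("fileserver.branch1.local", 445),
                     ("intranet.branch1.local", 80)]
    SYNC_INTERVAL = 300  # one small batch every five minutes keeps WAN traffic low

    def tcp_check(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def poll_and_push():
        # Polling happens over the LAN; only the summarised results cross the WAN.
        results = [
            {"site": SITE, "host": h, "port": p, "up": tcp_check(h, p), "ts": int(time.time())}
            for h, p in LOCAL_TARGETS
        ]
        req = urllib.request.Request(
            NOC_URL,
            data=json.dumps(results).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=10)
        except OSError as exc:
            # In a real poller you would buffer and retry rather than just log.
            print("Failed to reach NOC:", exc)

    while True:
        poll_and_push()
        time.sleep(SYNC_INTERVAL)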

Advantages of Distributed Network Monitoring

  • Real user view of network performance — with single poller network monitoring you see the network from a single perspective. With distributed network monitoring you see it from a number of vantage points across your network;
  • Helps with network troubleshooting — distributed network monitoring gives you multiple performance profiles, making it easier to detect outages and bottlenecks;
  • Reduced bandwidth requirements over WANs — a central poller will send requests over your precious WAN links. A distributed network monitor will usually be configured to send requests to local resources and only the appropriate global resources;
  • Single consolidated NOC view — rather than have a number of separate network monitoring systems sitting inside each campus, distributed network monitors give you the best of both worlds: monitor resources locally but consolidate all stats into a single NOC for analysis and storage.

Disadvantages of Distributed Network Monitoring

  • More expensive and complex — distributed monitors are more expensive than single poller monitors, sometimes quite a lot more. You also need to find the hardware upon which to deploy the remote pollers and the time for installation and configuration;
  • Unless carefully designed you may end up using more WAN bandwidth than a central network monitor — if you are not selective about which services you monitor, and from where, you will see no bandwidth savings with a distributed network monitor. Unless polling a resource from a remote site buys you some insight into your system’s performance, monitoring it from there is a waste of bandwidth (a configuration sketch follows this list).
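
One way to stay selective is to make the check assignment explicit per site, so each poller only tests what its local users genuinely consume. The snippet below is a hypothetical configuration for illustration, not any product’s actual format.

    # Hypothetical per-site check assignment: shared services are only polled from
    # the sites whose users actually consume them, so nothing crosses the WAN "just in case".
    CHECKS_BY_SITE = {
        "head-office":     ["erp.example.com", "mail.example.com", "intranet.example.com"],
        "branch-office-1": ["fileserver1.branch1.local", "mail.example.com"],
        "branch-office-2": ["fileserver2.branch2.local"],
    }

    def checks_for(site):
        """Return only the checks this site's poller should run."""
        return CHECKS_BY_SITE.get(site, [])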

Recommendations

  • Multiple single poller monitors, one for each remote office, may be more appropriate if each office runs its IT systems autonomously with few shared systems. Distributed network monitoring comes into its own when a single NOC view of the entire network is required. If you are happy with multiple autonomous point tools then a distributed system may be overkill;
  • Only monitor resources remotely that are genuinely used remotely. This will not only save you the bandwidth required to periodically test the resource, but also mean that you do not need to weaken your carefully designed security policy by making a resource more publicly available than is strictly necessary. In addition, remote monitoring of such a resource won’t tell you anything meaningful anyway, because none of your users use it remotely;
  • When remotely monitoring a resource, do not set up a separate comms channel for the monitoring system to use. For a performance monitor to be of any use it needs to use the same infrastructure that your users utilise. If you’re not careful the network monitor just ends up effectively monitoring itself.

I’ll be investigating your open source distributed network monitoring options soon. In the meantime, if you’ve got any feedback, please leave a comment!