Observations on 8 Issues of C# Weekly

C# Weekly Logo

At the end of 2013 I thought it would be interesting to create a C# focused weekly newsletter. I registered the domain, created a website and hooked it up to the excellent MailChimp email service. The site went live (or at least Google Analytics was added to the site) on 18th December 2013.

And then I kind of forgot about it for a year or so.

In the meantime the website was getting subscribers. Not many, but enough to suggest that there's an appetite for a weekly C# focused newsletter.

The original website was a simple affair, just a standard landing page written using Bootstrap. Simple it may have been, but rather effective.

I figured that I might as well give the newsletter a go. Whilst the audience wasn't very big (it still isn't), it was big enough to give some idea of whether the newsletter was working or not.

Now that I've curated eight issues, I thought it about time to take stock. The first issue was sent on Friday 27th February 2015, and an issue has been sent every Friday since.

C# Weekly subscriber chart

The number of subscribers has grown over the last eight weeks: starting from a base of 194 subscribers there are now 231, a net increase of 37. Of course there have been unsubscribes, but the new subscribers each week have always outnumbered them.

C# Weekly website visitors chart

The website traffic has also trended upwards over the last year, especially since the first issue was curated. Now that there are a number of pages on the website, the site is receiving more traffic from the search engines. The number of visitors is not high, but at least progress is being made.

I started a Google AdWords campaign for the newsletter fairly early on. The campaign was reasonably successful while it lasted. Unfortunately, the website must have tripped some kind of filter because the campaign was stopped. A pity, because the campaign did convert quite well. I appealed the decision and was informed that Google didn't like the fact that the site doesn't have a business model and that it links out to other websites a lot. Curiously, both charges could have been levelled at Google in the early days.

The weekly newsletter itself is a lot of fun to produce. During the week I look out for interesting and educational C# related content and tweet it via the C# Weekly Twitter account. I then condense all of the tweeted content into a single email once a week on a Friday. Not all of the tweeted content makes it into the weekly issue; there is often overlap between items produced during the week, so I get to choose what's best.

The job of curating does take longer than I originally anticipated. I suspect that each issue takes the better part of a day to produce, which is probably twice the time I expected. Perhaps, when I'm more experienced, I will be able to reduce the time taken to produce an issue without reducing the quality.

Time will tell.

If you want to learn about a topic, I can certainly recommend curating a newsletter as a learning tool. Even after only eight weeks I feel like I have learned quite a lot about C# that I wouldn't otherwise have known, especially about the upcoming C# version 6.

One of my better ideas over the last year was to move over to Curated, a newsletter service provider founded by the folks who run the very successful iOS Dev Weekly newsletter. The tools provided by Curated have helped a lot.

C# Weekly content interaction statistics

One of the best features of Curated is the statistics it provides for each issue. You can learn a lot from discovering which content subscribers are most interested in. You won't be surprised to find that the low-level C# content is the most popular.

The only minor problem I've found is that my dog-eared old site actually converted subscribers at a higher rate than the current site. The version 1 site was converting at around 18%, whereas the current site converts at around 13%.

I have a few ideas about why the conversion rate has dropped. The current site is a bit drab and needs to be spruced up a little. In addition, the current site displays previous issues, including the most recent issue on the home page. I wonder if having content on the home page actually distracts people from subscribing.

All told, curating a newsletter is fun. I can thoroughly recommend it. :)


The evils of the dashboard merry-go-round


What is the first thing you do when you get to the office? Check your email probably. Then what? I bet you go through the same routine of checking your dashboards to see what’s happened overnight.

That is exactly what I do every single morning at work.

And I keep checking those dashboards throughout the day. Sometimes I manage to get myself in a loop, continuing around and around the same set of dashboards.

I can pretty much guarantee that nothing will have changed significantly. Certainly not to the point where some action is necessary.

How many times do you check some dashboard and then action something? Me? Very rarely, if ever.

The Problem

I call this obsessive need to view all of my various dashboards my dashboard merry-go-round.

The merry-go-round is a particular problem with real-time stats. When Google Analytics implemented real-time stats, I guarantee that webmaster productivity plummeted the very next day.

But the merry-go-round is not confined to Google Analytics or even website statistics. Any kind of statistics will do.

It is a miracle I manage to do any work. One of the most seductive things about the constant dashboard merry-go-round is that it feels like work. There you are, frantically pounding away, quickly moving between your various dashboards. It even looks like work to everybody else. Your boss probably thinks you're being really productive.

The problem is that there is no action. There is no end result.

You log into Google Analytics, go to the “Real-Time” section and discover that there are five people browsing your site, on pages X, Y and Z.

So what? You are not getting any actionable information from this.

You then go to the other three sites you've got in Google Analytics. Rinse and repeat.

Solutions

The obvious solution is to just stop. But if it was as simple as that, you’d already have stopped and so would I.

Notifications

One of the things you are trying to do by going into your dashboards is to see what’s changed. Has anything interesting happened? One way you can short circuit this is to configure the dashboard to tell you when something interesting has happened.

I use the Pingdom service to monitor company websites and I almost never go into the Pingdom dashboard. Why? Because I've configured the service to send a text message to my mobile whenever a website goes down. If I don't receive a text, then there's no need to look at the dashboard.

Notifications do come with a pretty big caveat: you must trust your notifications. If I can’t rely on Pingdom to tell me when my sites are down, I may be tempted to go dashboard hunting.

Even if your systems are only capable of sending email alerts, you can still turn those into mobile-friendly notifications and even phone call alerts using services like PagerDuty, OpsGenie and VictorOps.

Instead of needing a mobile phone app for each service you run, you can merge all of your alerts into a single app with mobile notifications.

If you haven’t received a notification, then you don’t need to go anywhere near the dashboard. Every time you think about going there just remind yourself that, if there was a problem, you’d already know about it.

Time Management Techniques

Pomodoro Technique or Timeboxing or … lots of other similar techniques.

No time management technique will, of itself, cure your statsitis; it just moves the stats watching into your own "pomodori", or the time in between your productive tasks. If you want to sacrifice your own leisure time to stats, then go nuts. But at least you're getting work done in the meantime.

The Writers’ Den

One technique writers use to be more productive is the writer's den. The idea is that you set aside a space with as few distractions as possible, including a PC with just the software you need to work, not connected to the internet.

Not a bad idea, but unfortunately, a lot of us can’t simply switch the internet off. We either work directly with internet connected applications or we need to use reference materials only available on the web.

As ever, there are software solutions that can restrict access to sites at pre-defined times of the day. You could permit yourself free access for the first half hour of work, and then again just before you leave, leaving a large chunk of your day for productive work.

The drawback with systems that restrict access is that, as you are likely to be the person setting up the system, you’re also likely to be able to circumvent it quite easily too.

Conclusion

For me, dashboards and stats can be both a boon and a curse. They can hint at problems and actions you need to take, but they can also suck an awful lot of time out of your day.

I've found that a combination of time management and really good notifications is a great way to stop the dashboard merry-go-round and put stats in their rightful place: a tool to help you improve, not an end in themselves.

The problem with concentrating too much on statistics is that it is so seductive. It feels like you are working hard but, if no actions come out of the constant stats watching, then it is all wasted effort.

If you have any hints and tips on how you overcame your dashboard merry-go-round, please leave a comment. :)

[Picture is of a French old-fashioned style carousel with stairs in La Rochelle by Jebulon (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons]


Operating System Use by C# Devs

I started a C# newsletter about a month ago. Nothing fancy, just a weekly digest of news and links related to C# programming and the related libraries and tools. I sent out the fifth issue last Friday.

The website has Google Analytics (GA) installed, and one of the things that GA gives you is a breakdown of which operating systems your visitors are using. Now, it is early days; the website has only had 174 visitors in the last 31 days (1–31 March), so I don't want to draw too many conclusions from such a small sample. But I am a little surprised by the operating systems in use by the visitors.

Operating System Usage

A breakdown of the operating systems in use by visitors to the C# Weekly website.
Operating System    Visitors    Visitor %
Windows             125         71.84%
Macintosh            21         12.07%
Android               11         6.32%
Linux                  9         5.17%
iOS                    5         2.87%
(not set)              1         0.57%
Chrome OS              1         0.57%
Windows Phone          1         0.57%

You'd kinda expect C# developers to be at the vanguard of Microsoft Windows usage. Certainly, if you go back a few years, you would expect Windows usage to be in the mid-nineties, percentage-wise. If the usage levels shown in the above table are accurate, then that certainly isn't the case any more.

Perhaps the fact that even C# devs are disappearing over to the likes of OS X and Linux is the reason that C# and .NET as a whole are being open sourced and ported to new operating systems. If C# devs are moving over to other operating systems for web browsing, then it may not be long until they start developing over there too.


New AVTECH Room Alert 3W Box Opening


AVTECH have finally released the WiFi version of the AVTECH Room Alert 3E. The new model, the AVTECH Room Alert 3W, supports WiFi connectivity as well as all of the features you've appreciated on the wired ethernet model. The unit has a built-in temperature sensor, with room for one more digital sensor, as well as room for one low-voltage or dry-contact sensor.

The new monitor will connect easily to the GoToMyDevices cloud service. Just configure the unit with your wireless network details, and the monitor will start uploading your sensor readings to your private data store in the cloud. You can of course switch off the cloud offering if you don't want it. You can also use your existing on-premises Device ManageR to centralise your data collection and alarming.


Things I Wished I’d Known Before Developing Xsensior Live

One of the best features of Xsensior Lite is the ability to view your sensor data and alerts anywhere you have web access. The website that provides the ability to see your sensor data is called Xsensior Live. Just log in and your sensor data is displayed in pretty graphs, with 24-hour highs and lows as well as any alerts that have been triggered. The feature was launched just over two years ago.

I thought I'd share some of the things I've learnt developing Xsensior Live, as well as maintaining and running it since launch. Building and running a real-time Internet of Things cloud does pose a number of challenges that aren't present when developing sites driven mainly by human interaction.

The System

The system consists of two parts: a website and a background worker service. The website provides the user interface for viewing sensor data, configuring alarm conditions and generating reports, as well as a RESTful API for receiving sensor data.

Diagram showing the architecture of the Xsensior Live system.

The worker service sits in the background waiting for messages to appear on three MSMQ queues: an alarm input queue, an email input queue and a summary input queue. The alarm input queue receives messages from the API alerting the worker to new sensor input. The worker then checks the new sensor input to see if the sensor has entered an alarm state and fires off alerts as appropriate. The email input queue receives email messages from the UI when, for instance, a registration email is required. The summary input queue receives messages when the schedule configuration of the daily summary has changed; the worker then changes the schedule of the daily summary job as appropriate.
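
For illustration, here is a rough sketch of how a worker might listen to one of those MSMQ queues using System.Messaging. The queue path, AlarmMessage type and HandleAlarm method are my own assumptions, not the actual Xsensior Live implementation.

```csharp
using System.Messaging;

// A sketch of a listener for the alarm input queue.
public class AlarmQueueListener
{
    private readonly MessageQueue _queue;

    public AlarmQueueListener()
    {
        _queue = new MessageQueue(@".\private$\alarm-input");               // assumed queue path
        _queue.Formatter = new XmlMessageFormatter(new[] { typeof(AlarmMessage) });
        _queue.ReceiveCompleted += OnReceiveCompleted;
        _queue.BeginReceive();
    }

    private void OnReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
    {
        var message = _queue.EndReceive(e.AsyncResult);
        HandleAlarm((AlarmMessage)message.Body);   // check for an alarm state, fire alerts
        _queue.BeginReceive();                     // wait for the next message
    }

    private void HandleAlarm(AlarmMessage alarm) { /* alarm checking omitted */ }
}

public class AlarmMessage
{
    public int SensorId { get; set; }
    public double Reading { get; set; }
}
```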

The worker service manages long running jobs using an in-process version of the Quartz.NET library. Tasks that require intermittent work, like the daily summaries or the alerts, are scheduled as Quartz jobs. When the user modifies the timing of things like the daily summary schedule, or the frequency of alert repetitions, the worker can adjust the job frequency as appropriate.
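
As a rough sketch of that idea, here is how a recurring job might be scheduled and rescheduled with the in-process Quartz.NET scheduler. The DailySummaryJob class, job identities and cron expressions are illustrative assumptions, and the synchronous calls assume the Quartz.NET 2.x API that was current at the time.

```csharp
using Quartz;
using Quartz.Impl;

// Illustrative daily summary job.
public class DailySummaryJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // build and email the daily summary here
    }
}

public static class SummaryScheduler
{
    public static IScheduler Start()
    {
        var scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        var job = JobBuilder.Create<DailySummaryJob>()
            .WithIdentity("dailySummary")
            .Build();

        var trigger = TriggerBuilder.Create()
            .WithIdentity("dailySummaryTrigger")
            .WithCronSchedule("0 0 8 * * ?")   // every day at 08:00
            .Build();

        scheduler.ScheduleJob(job, trigger);
        return scheduler;
    }

    // When the user changes the summary schedule, swap in a new trigger.
    public static void Reschedule(IScheduler scheduler, string cron)
    {
        var newTrigger = TriggerBuilder.Create()
            .WithIdentity("dailySummaryTrigger")
            .WithCronSchedule(cron)
            .Build();

        scheduler.RescheduleJob(new TriggerKey("dailySummaryTrigger"), newTrigger);
    }
}
```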

All pretty standard stuff.

But real-time data processing does present a number of challenges to anyone implementing a cloud IoT service. Here are a few things I wish I'd known beforehand.

Client / Server Communication

Put a lot of effort into the software connecting to your cloud service. You will save yourself a lot of pain if the client is able to handle downtime on the cloud side without losing data.
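
As an illustration of that idea, here is a sketch of a client that keeps readings buffered until the server has accepted them. The endpoint URL and JSON payload handling are assumptions; a memory-constrained device would need a bounded buffer or an on-disk queue instead.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;

// A sketch of a client that only discards a reading once the server has accepted it.
public class BufferedSensorClient
{
    private readonly HttpClient _http = new HttpClient();
    private readonly Queue<string> _pending = new Queue<string>();

    public void Send(string readingJson)
    {
        _pending.Enqueue(readingJson);
        Flush();
    }

    private void Flush()
    {
        while (_pending.Count > 0)
        {
            try
            {
                var content = new StringContent(_pending.Peek(), Encoding.UTF8, "application/json");
                var response = _http.PostAsync("https://api.example.com/readings", content).Result;
                if (!response.IsSuccessStatusCode)
                    return;           // server-side problem: keep the reading and retry later
                _pending.Dequeue();   // only discard once the server has accepted it
            }
            catch (Exception)
            {
                return;               // network down: keep everything buffered and retry later
            }
        }
    }
}
```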

Getting outside of the customer's firewall can be quite an effort. The more enterprise your customer, the more support effort it will take. Expect an enhanced support effort even when using well supported protocols like HTTPS. If you are using something more exotic, expect an even bigger effort, especially if a new port has to be opened in the corporate firewall. That's gonna be painful, and the customer will blame you.

Make sure your customer can test the connection from the client to the server easily and provide great diagnostic information upon failure.

If your client is memory limited, then you probably won't have any choice but to accept that the client cannot cache sensor readings. In that case, it is imperative that the system minimises downtime of the API part of your system.

Data

There will be a lot of data and it will accumulate really fast. It will rapidly get to a size that makes the database hard to handle.

Try to put a limit on the age of the data you retain, so that the quantity of data per customer is bounded.
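
A retention limit can be as simple as a scheduled clean-up job. The sketch below assumes a SensorReadings table with a RecordedAt column, both purely illustrative; on a table of any size you would want to delete in batches to keep transactions small, but the principle is the same.

```csharp
using System;
using System.Data.SqlClient;

// A sketch of a retention clean-up that could run as one of the worker's scheduled jobs.
public static class ReadingRetention
{
    public static int DeleteOldReadings(string connectionString, int retentionDays)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "DELETE FROM SensorReadings WHERE RecordedAt < @cutoff", connection))
        {
            command.Parameters.AddWithValue("@cutoff", DateTime.UtcNow.AddDays(-retentionDays));
            connection.Open();
            return command.ExecuteNonQuery();   // number of readings removed
        }
    }
}
```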

Monitoring

Monitoring a website is simple enough; monitoring a Windows service is not quite so straightforward. A website has a public endpoint that can easily be tested using an online service like Pingdom. A Windows service, by contrast, has no public endpoint and cannot be monitored quite so easily. You can install a probe onto your system for your favourite network monitor, or enable the Windows SNMP agent or the WMI agent. But they are a lot more effort and open yet another port to be exploited. You'll also need to set up your own monitor. All told, a lot more work.

If you must have a separate worker, then hosting with Azure Websites may provide a relatively simple answer by using the WebJobs feature. You can have one or more WebJobs and the Azure system ensures they run when necessary. A failure of a WebJob is also handled by Azure, so if one of your long running WebJobs fails, Azure will fire it straight back up without any effort from you. Another possible route might be to avoid having a separate worker process at all. The Hangfire project looks like a very easy way to add long running background jobs to an ASP.NET project without requiring a separate worker process.

I wish I’d known about Hangfire a couple of years ago :)
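
For anyone curious what that looks like, here is a hedged sketch of Hangfire wiring for the kinds of jobs described above. The SummarySender and EmailSender helpers and the storage name are assumptions, not the actual Xsensior Live code, and a BackgroundJobServer (or app.UseHangfireServer() in an OWIN Startup class) must also be running to process the jobs.

```csharp
using Hangfire;

public static class JobSetup
{
    public static void Configure()
    {
        // Point Hangfire at a SQL Server database (connection string name assumed).
        GlobalConfiguration.Configuration.UseSqlServerStorage("XsensiorLiveDb");

        // Recurring work, such as the daily summary email.
        RecurringJob.AddOrUpdate("daily-summary",
            () => SummarySender.SendDailySummaries(), Cron.Daily(8));

        // Fire-and-forget work, such as a registration email.
        BackgroundJob.Enqueue(() => EmailSender.SendRegistrationEmail("user@example.com"));
    }
}

// Illustrative stand-ins for the real job implementations.
public static class SummarySender
{
    public static void SendDailySummaries() { /* build and email the summaries */ }
}

public static class EmailSender
{
    public static void SendRegistrationEmail(string address) { /* send the email */ }
}
```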

Logging

Log everything important.

Once you've put the effort into creating meaningful logs, you need to monitor them, and not just on your live system. If you are deploying to a development or staging environment and performing testing there, then monitor those logs too; any errors or warnings found in your logs should be surfaced and made available through a dashboard.

There are lots of tools like Loggly for real-time log analysis. Loggly includes a nice dashboard and alerting module to alert you to various error conditions. A number of libraries exist as add-ons for well known logging libraries like log4net, so you can implement Loggly with just a small change to your website configuration and zero code changes.
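
By way of example, here is a minimal log4net sketch. The SensorReadingProcessor class is an illustrative name; the appender itself (rolling file, Loggly, etc.) lives in the log4net section of the config file, which is why adding Loggly later needs no code changes here.

```csharp
using System;
using log4net;
using log4net.Config;

public class SensorReadingProcessor
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(SensorReadingProcessor));

    static SensorReadingProcessor()
    {
        XmlConfigurator.Configure();   // read the log4net configuration section
    }

    public void Process(int sensorId)
    {
        try
        {
            // ... store the reading, check alarm thresholds ...
        }
        catch (Exception ex)
        {
            Log.Error(string.Format("Failed to process reading for sensor {0}", sensorId), ex);
            throw;   // let the caller or a global handler deal with it, never swallow silently
        }
    }
}
```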

Testing

Unit testing is great, but don't forget the integration tests. There are bugs that will fall between the cracks of unit tests, so test the system as a whole too. You don't need to test everything with integration tests, just the parts of the system likely to fail when talking to external components that are usually stubbed out of your unit tests, like the database, email server and SMS sender.
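
As an illustration, here is a sketch of one such integration test written with NUnit against a test database. The ReadingRepository class, connection string, table and column names are all assumptions made for the example.

```csharp
using System;
using System.Data.SqlClient;
using NUnit.Framework;

// Illustrative repository that talks to the real (test) database.
public class ReadingRepository
{
    private readonly string _connectionString;
    public ReadingRepository(string connectionString) { _connectionString = connectionString; }

    public int Save(int sensorId, double value)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO SensorReadings (SensorId, Value, RecordedAt) OUTPUT INSERTED.Id " +
            "VALUES (@sensorId, @value, @recordedAt)", connection))
        {
            command.Parameters.AddWithValue("@sensorId", sensorId);
            command.Parameters.AddWithValue("@value", value);
            command.Parameters.AddWithValue("@recordedAt", DateTime.UtcNow);
            connection.Open();
            return (int)command.ExecuteScalar();   // assumes Id is an INT IDENTITY column
        }
    }

    public double GetValueById(int id)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Value FROM SensorReadings WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();
            return (double)command.ExecuteScalar();   // assumes Value is a FLOAT column
        }
    }
}

[TestFixture]
public class ReadingRepositoryIntegrationTests
{
    [Test]
    public void Saved_reading_can_be_read_back()
    {
        var repository = new ReadingRepository(
            @"Server=.\SQLEXPRESS;Database=XsensiorLiveTest;Integrated Security=true");

        var id = repository.Save(sensorId: 42, value: 21.5);

        Assert.That(repository.GetValueById(id), Is.EqualTo(21.5));
    }
}
```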

Deployment

Automate deployment of the whole system. You need to be able to deploy new versions at the push of a single button. Tools are available to make automated deployment quite simple and straightforward. We've used Octopus Deploy for the last two years and find it invaluable. It is free for small deployments too, so no excuses. Other stacks like Ruby, Node and Java all have rich ecosystems of automated deployment tools too.

Architect the site so that the machine-to-machine part is separate from the user interface part of the site. The chances are that the machine-to-machine part will not change very much, whereas the user interface will change much more frequently.

You need to be able to upgrade the user interface part without taking down the machine-to-machine part.

Conclusion

If there was a single recommendation I would suggest you take away from this article, I’d say it would be to split the API from the user interface. I think that on its own would have made the biggest impact upon managing the site over the last two years.

Next would be the client side. The more resilient you can make the client, the easier upgrading the server is going to be. The last thing you want is for your customers to lose data.

Third would be to make sure your site is returning a pukka HTTP error status code. There is no point in displaying some kind of error message and then returning a 2XX status code. Your monitor, in all probability, is just looking for a status code in the 2XX range and that's it. If you fail, then you must return an appropriate error code, something in the 5XX range. There is absolutely no point in having monitoring that does not pick up errors. Obvious, I know, but it does happen. Writing a test for each controller that gets one of your stubs to throw and then ensures a 5XX status code does not take very long.
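
Here is a sketch of that kind of test using ASP.NET Web API's in-memory HttpServer. The ReadingsController below is a stand-in whose dependency always throws, taking the place of a stubbed repository; it is not the real Xsensior Live controller.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using NUnit.Framework;

// Stand-in controller: in the real test the injected repository stub would throw.
public class ReadingsController : ApiController
{
    public string Get(int id)
    {
        throw new InvalidOperationException("database unavailable");
    }
}

[TestFixture]
public class ErrorStatusCodeTests
{
    [Test]
    public void Failing_dependency_returns_a_5xx_status_code()
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute("Default", "api/{controller}/{id}",
            new { id = RouteParameter.Optional });

        using (var server = new HttpServer(config))
        using (var client = new HttpClient(server))
        {
            var response = client.GetAsync("http://localhost/api/readings/1").Result;

            Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.InternalServerError));
        }
    }
}
```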

Update 10 March 2015: Added the point in the conclusion that the status code should be in the 5XX range.


New domain and last chance to subscribe via email

If you receive The Tech Teapot via email, this is your last chance to continue doing so. From now on, The Tech Teapot is moving over to MailChimp instead of Google Feedburner for delivering emails with the latest posts.

Feedburner has been withering on the vine since being taken over by Google. The reason that this is your final chance is that Google doesn't manage the email list. Of the 500+ email subscribers, not a single one has been removed from the list due to email bouncing. I find it hard to believe that no subscriber has moved jobs in the last 8 years. Consequently, the Feedburner list is full of invalid, out of date email addresses, and MailChimp will not import it.

Sorry about that, but fear not, you can sign-up here.

P.S. The more observant may have noticed the change of domain. I've owned the domain for a while and thought it about time the blog was moved onto its very own domain. Hope you like it.

P.P.S. Feed subscribers will also need to update the feed URL to http://techteapot.com/feed/

Update 5 March 2015: turns out that when you delete your feed in Feedburner you are given the option to redirect the Feedburner feed back to the original feed. So, if you’re subscribed to the feed through your feed reader, you will not need to change anything in order to continue receiving new updates :)


Top 3 Cable Tracing Technologies

Firstly, why would you need to trace network cabling? In a perfect world you wouldn’t need to, but even if a network begins life properly labelled, things have a habit of changing. Documentation and cable labelling don’t always keep up when changes are made.

A jumble of network cables in a network cabinet.

When you need to re-arrange the cabling in your patch panel, can you be 100% certain that the label is correct? You can be reasonably certain if you installed and maintain the network yourself, but what if others are involved? Are they as fastidious as you are in keeping the network documentation up to date?

Before moving a cable, it doesn't do any harm to make sure the label is up to date. It is one thing to disconnect a single phone or workstation, quite another to disconnect the server.

A number of different technologies exist for tracing network cabling. Each technology has its plus points and its downsides too. A brief explanation of each follows, outlining when it should be applied.

Tone Tracing

Tone tracing is pretty much as old as copper cabling itself; it is the granddaddy of all cable tracing technologies. The basic idea is that at one end of the cable you place an electrical signal onto the cable, using a tone generator, and then trace that signal, using a tone tracer, in order to work out where the cable is located.

It could not really be much simpler. Unfortunately, whilst it is simple in theory, and usually really is that simple in practice, network cabling in particular has some characteristics that make things a little more complex.

The design characteristics of the network cable are working against you. Unshielded Twisted Pair (UTP) cabling is designed to reduce the interference between the pairs of copper wire that make up the cable as a whole. CAT5, CAT5e and CAT6 cable types all have four pairs of copper wire carefully twisted together to minimise interference. This presents a problem to anyone attempting to trace a category 5, 5e or 6 cable, because you want to maximise the signal on the cable in order to increase the strength of the signal you can detect.

The best way to minimise the damping effect of the cable twists is to place the signal on a single wire within the cable whilst the other wire of the pair is earthed. If a signal is placed onto both wires of a pair, the twists in the cable will work to dampen the signal; placing the signal onto a single wire helps to avoid this dampening effect.

Tone tracers have traditionally been analog. More recently digital or asymmetric tone tracers have arrived onto the market. Asymmetric toners have a number of advantages over more traditional analog tracers.

A diagram showing how a tone generator and tone tracer works.

Tone tracing is the only cable tracing technology that can be performed on a live cable. More modern asymmetric tone tracers operate at frequencies well above those used by even very modern cable standards like CAT7, so the tone signal does not interfere with the network signal.

Locating exactly where a fault lies can be very useful in deciding whether a cable run needs replacing or repairing. A fault close to the end of the cable may be repairable; a fault in the middle of the cable would, in all likelihood, require a replacement. A tone tracer can be used to locate a fault, though it is a laborious process, tracing each wire until the break is located. A full featured cable tester or TDR tester would be much faster and consequently a much better use of your time.

Continuity Testing

A continuity tester provides the ideal way to locate and label your network cabling. Whilst a toner can only work one cable at a time, a continuity tester can locate up to 20 cables at a time.

Each continuity tester is supplied with at least one remote. The remote fixes onto one end of your cable and you place the tester itself on the other end. When the tester and the remote are connected to the same cable, the continuity tester will show the number of the connected remote. Additional remotes can usually be supplied for most continuity testers or are included in the kit form of the tester. The additional remotes are numbered sequentially allowing you to locate and label a batch of cables at a time. If you have a large number of cables to locate, this can be a real time saver and will save you a lot of leg work.

In addition, a continuity tester is also capable of simple cable testing, ensuring that all of the pairs of wires have no breaks and are connected correctly. It must be stressed that low-end continuity testers can be fooled into giving a positive result when the cable is incorrectly wired.

Most continuity testers have a tone generator built in, so, with the addition of a tone tracer, they can be used for tone tracing as well. Using the built-in tone generator on a continuity tester saves you from carrying around an extra tool, saving space and weight in your kit bag.

Hub Blink

Most modern hubs and switches have activity lights for each port indicating the traffic level and status. A recent addition to many cable testers and outlet identifiers has been the ability to blink the lights on a port. This feature is called hub blink.

Of course, this feature is only useful if the cables you wish to locate are connected to a live network. Hub blink is completely useless if you are trying to locate bare wire cables or before the network infrastructure has been installed.

Conclusion

Which technology is the best fit for you is largely dictated by a few factors, like how many cables you need to trace and whether the cables are live.

If you are tracing cables that are not terminated, your options are a traditional tone generator and tracer or a continuity tester. If the cable is live, your only option is to use an asymmetric toner.

Hub blinking is also fine so long as your switch is relatively small. If you’ve got a huge switch with hundreds of ports, you may well struggle to identify exactly which port is blinking.

If you have a lot of cables to find, then a continuity tester with multiple remotes will allow you to identify cables in batches, speeding up the process of identification and labelling.



My 2014 Reading Log

A list of all of the books I read in 2014 and logged in Goodreads. I read a few more technical books but didn't log them for whatever reason.

January

Andrew Smith: Moondust: In Search Of The Men Who Fell To Earth (Non-Fiction)

February

March

William Golding: The Spire (Fiction)

Niall Ferguson: The Pity of War: Explaining World War 1 (Non-Fiction)

April

Simon Parkes: Live at the Brixton Academy: A riotous life in the music business (Non-Fiction)

May

Christopher Priest: Inverted World (Fiction)

Philip K. Dick: VALIS (Fiction)

Lee Campbell: Introduction to Rx (Non-Fiction)

June
July

Neil Gaiman: Neverwhere (Fiction)

August
September

Fred Hoyle: A for Andromeda (Fiction)

October

Mark Ellen: Rock Stars Stole My Life! (Non-Fiction)

John Crowley: The Deep (Fiction)

Neil Gaiman: The Ocean at the End of the Lane (Fiction)

Philip K. Dick: A Maze of Death (Fiction)

November

Paul Ham: Hiroshima Nagasaki: The Real Story of the Atomic Bombings and their Aftermath (Non-Fiction)

December

A total of 14 books: 8 fiction (mostly science fiction and fantasy) and 6 non-fiction. Of the two autobiographies, I found Mark Ellen's Rock Stars Stole My Life! to be very good, taking the reader through the rock landscape from the 1960s to the present day through the eyes of a music journalist.

The book I most enjoyed was Neil Gaiman’s The Ocean at the End of the Lane. Neil Gaiman was certainly my find of 2014. He manages to write quite original fantasy in a way that doesn’t make the book feel like fantasy.


New Aviosys IP Power 9858 Box Opening

A series of box opening photos of the new Aviosys IP Power 9858 4-port network power switch. This model will in due course replace the Aviosys IP Power 9258 series of power switches. The 9258 series is still available in the meantime though, so don't worry.

The new model supports WiFi (802.11 b/g/n, with WPS for easy WiFi setup), auto reboot on ping failure, a time-of-day scheduler and an internal temperature sensor. Aviosys have also built apps for iOS and Android, so you can now manage your power switch on the move. Together with the 8-port Aviosys IP Power 9820, they provide very handy tools for remote power management of devices. Say goodbye to travelling to a remote site just to reboot a broadband router.